Test Report: KVM_Linux_crio 18929

b7c7f6c35857e0c10d9dae71da379568bba5603f:2024-05-20:34549

Test fail (16/221)

TestAddons/parallel/Ingress (154.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-840762 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-840762 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-840762 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [efadd51f-18e9-48cb-bc58-103881fd9263] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [efadd51f-18e9-48cb-bc58-103881fd9263] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.00461857s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-840762 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.258923594s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-840762 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.19
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 addons disable ingress-dns --alsologtostderr -v=1: (1.437565845s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 addons disable ingress --alsologtostderr -v=1: (8.054951509s)
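
The step that failed above is the in-VM curl against the ingress controller: curl exited with status 28, which is its "operation timed out" code, so the request got no answer within the 2m10s window; everything after that point is teardown and post-mortem collection. A rough manual triage sketch against the same profile (the profile name addons-840762 and the app.kubernetes.io/component=controller selector are taken from the log above; this assumes the cluster is still running and is not part of the recorded test run):

    # confirm the ingress-nginx controller pod is still Ready (same selector the test waits on)
    kubectl --context addons-840762 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide

    # repeat the failing request from inside the VM with verbose output and a short timeout
    out/minikube-linux-amd64 -p addons-840762 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # check whether the request ever reached the controller
    kubectl --context addons-840762 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100
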
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-840762 -n addons-840762
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 logs -n 25: (1.3152716s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | -p download-only-600768                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768                                                                     | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-562366                                                                     | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768                                                                     | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | binary-mirror-910817                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44813                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-910817                                                                     | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-840762 --wait=true                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | -p addons-840762                                                                            |                      |         |         |                     |                     |
	| ip      | addons-840762 ip                                                                            | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | -p addons-840762                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840762 ssh cat                                                                       | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | /opt/local-path-provisioner/pvc-ef6f8a93-1567-44f6-8095-fb964ae1388e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840762 addons                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840762 addons                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840762 ssh curl -s                                                                   | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-840762 ip                                                                            | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:54:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:54:50.749933  610501 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:54:50.750199  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750209  610501 out.go:304] Setting ErrFile to fd 2...
	I0520 12:54:50.750213  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750399  610501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 12:54:50.750992  610501 out.go:298] Setting JSON to false
	I0520 12:54:50.751872  610501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9431,"bootTime":1716200260,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:54:50.751931  610501 start.go:139] virtualization: kvm guest
	I0520 12:54:50.754672  610501 out.go:177] * [addons-840762] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:54:50.756981  610501 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 12:54:50.759177  610501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:54:50.756934  610501 notify.go:220] Checking for updates...
	I0520 12:54:50.761478  610501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:54:50.763622  610501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:50.765719  610501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:54:50.767722  610501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:54:50.769950  610501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:54:50.803102  610501 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:54:50.805399  610501 start.go:297] selected driver: kvm2
	I0520 12:54:50.805434  610501 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:54:50.805454  610501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:54:50.806441  610501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.806556  610501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:54:50.822923  610501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:54:50.822988  610501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:54:50.823216  610501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:54:50.823247  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:54:50.823257  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:54:50.823270  610501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:54:50.823335  610501 start.go:340] cluster config:
	{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:54:50.823464  610501 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.827202  610501 out.go:177] * Starting "addons-840762" primary control-plane node in "addons-840762" cluster
	I0520 12:54:50.829149  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:54:50.829183  610501 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:54:50.829194  610501 cache.go:56] Caching tarball of preloaded images
	I0520 12:54:50.829274  610501 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:54:50.829286  610501 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:54:50.829591  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:54:50.829616  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json: {Name:mk1bcc97b7c3196011ae8aa65e58032d87fa57bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:54:50.829771  610501 start.go:360] acquireMachinesLock for addons-840762: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:54:50.829815  610501 start.go:364] duration metric: took 31.227µs to acquireMachinesLock for "addons-840762"
	I0520 12:54:50.829832  610501 start.go:93] Provisioning new machine with config: &{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:54:50.829901  610501 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:54:50.832368  610501 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 12:54:50.832505  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:54:50.832552  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:54:50.847327  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0520 12:54:50.847765  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:54:50.848420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:54:50.848446  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:54:50.848806  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:54:50.849047  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:54:50.849193  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:54:50.849375  610501 start.go:159] libmachine.API.Create for "addons-840762" (driver="kvm2")
	I0520 12:54:50.849403  610501 client.go:168] LocalClient.Create starting
	I0520 12:54:50.849451  610501 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 12:54:50.991473  610501 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 12:54:51.176622  610501 main.go:141] libmachine: Running pre-create checks...
	I0520 12:54:51.176652  610501 main.go:141] libmachine: (addons-840762) Calling .PreCreateCheck
	I0520 12:54:51.177212  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:54:51.177703  610501 main.go:141] libmachine: Creating machine...
	I0520 12:54:51.177718  610501 main.go:141] libmachine: (addons-840762) Calling .Create
	I0520 12:54:51.177909  610501 main.go:141] libmachine: (addons-840762) Creating KVM machine...
	I0520 12:54:51.179266  610501 main.go:141] libmachine: (addons-840762) DBG | found existing default KVM network
	I0520 12:54:51.180081  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.179921  610539 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0520 12:54:51.180138  610501 main.go:141] libmachine: (addons-840762) DBG | created network xml: 
	I0520 12:54:51.180166  610501 main.go:141] libmachine: (addons-840762) DBG | <network>
	I0520 12:54:51.180178  610501 main.go:141] libmachine: (addons-840762) DBG |   <name>mk-addons-840762</name>
	I0520 12:54:51.180193  610501 main.go:141] libmachine: (addons-840762) DBG |   <dns enable='no'/>
	I0520 12:54:51.180204  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180218  610501 main.go:141] libmachine: (addons-840762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:54:51.180227  610501 main.go:141] libmachine: (addons-840762) DBG |     <dhcp>
	I0520 12:54:51.180235  610501 main.go:141] libmachine: (addons-840762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:54:51.180247  610501 main.go:141] libmachine: (addons-840762) DBG |     </dhcp>
	I0520 12:54:51.180255  610501 main.go:141] libmachine: (addons-840762) DBG |   </ip>
	I0520 12:54:51.180318  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180349  610501 main.go:141] libmachine: (addons-840762) DBG | </network>
	I0520 12:54:51.180368  610501 main.go:141] libmachine: (addons-840762) DBG | 
	I0520 12:54:51.186377  610501 main.go:141] libmachine: (addons-840762) DBG | trying to create private KVM network mk-addons-840762 192.168.39.0/24...
	I0520 12:54:51.253528  610501 main.go:141] libmachine: (addons-840762) DBG | private KVM network mk-addons-840762 192.168.39.0/24 created
	I0520 12:54:51.253564  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.253446  610539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.253577  610501 main.go:141] libmachine: (addons-840762) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.253591  610501 main.go:141] libmachine: (addons-840762) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:54:51.253664  610501 main.go:141] libmachine: (addons-840762) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:54:51.515102  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.514941  610539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa...
	I0520 12:54:51.762036  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761845  610539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk...
	I0520 12:54:51.762086  610501 main.go:141] libmachine: (addons-840762) DBG | Writing magic tar header
	I0520 12:54:51.762101  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 (perms=drwx------)
	I0520 12:54:51.762118  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:54:51.762125  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 12:54:51.762131  610501 main.go:141] libmachine: (addons-840762) DBG | Writing SSH key tar header
	I0520 12:54:51.762141  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761967  610539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.762151  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 12:54:51.762163  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762
	I0520 12:54:51.762179  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:54:51.762201  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:54:51.762212  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 12:54:51.762223  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.762236  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 12:54:51.762248  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:54:51.762255  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:51.762264  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:54:51.762277  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home
	I0520 12:54:51.762293  610501 main.go:141] libmachine: (addons-840762) DBG | Skipping /home - not owner
	I0520 12:54:51.763533  610501 main.go:141] libmachine: (addons-840762) define libvirt domain using xml: 
	I0520 12:54:51.763552  610501 main.go:141] libmachine: (addons-840762) <domain type='kvm'>
	I0520 12:54:51.763560  610501 main.go:141] libmachine: (addons-840762)   <name>addons-840762</name>
	I0520 12:54:51.763565  610501 main.go:141] libmachine: (addons-840762)   <memory unit='MiB'>4000</memory>
	I0520 12:54:51.763570  610501 main.go:141] libmachine: (addons-840762)   <vcpu>2</vcpu>
	I0520 12:54:51.763574  610501 main.go:141] libmachine: (addons-840762)   <features>
	I0520 12:54:51.763580  610501 main.go:141] libmachine: (addons-840762)     <acpi/>
	I0520 12:54:51.763586  610501 main.go:141] libmachine: (addons-840762)     <apic/>
	I0520 12:54:51.763593  610501 main.go:141] libmachine: (addons-840762)     <pae/>
	I0520 12:54:51.763604  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.763612  610501 main.go:141] libmachine: (addons-840762)   </features>
	I0520 12:54:51.763623  610501 main.go:141] libmachine: (addons-840762)   <cpu mode='host-passthrough'>
	I0520 12:54:51.763629  610501 main.go:141] libmachine: (addons-840762)   
	I0520 12:54:51.763646  610501 main.go:141] libmachine: (addons-840762)   </cpu>
	I0520 12:54:51.763655  610501 main.go:141] libmachine: (addons-840762)   <os>
	I0520 12:54:51.763660  610501 main.go:141] libmachine: (addons-840762)     <type>hvm</type>
	I0520 12:54:51.763665  610501 main.go:141] libmachine: (addons-840762)     <boot dev='cdrom'/>
	I0520 12:54:51.763669  610501 main.go:141] libmachine: (addons-840762)     <boot dev='hd'/>
	I0520 12:54:51.763678  610501 main.go:141] libmachine: (addons-840762)     <bootmenu enable='no'/>
	I0520 12:54:51.763688  610501 main.go:141] libmachine: (addons-840762)   </os>
	I0520 12:54:51.763701  610501 main.go:141] libmachine: (addons-840762)   <devices>
	I0520 12:54:51.763709  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='cdrom'>
	I0520 12:54:51.763728  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/boot2docker.iso'/>
	I0520 12:54:51.763746  610501 main.go:141] libmachine: (addons-840762)       <target dev='hdc' bus='scsi'/>
	I0520 12:54:51.763754  610501 main.go:141] libmachine: (addons-840762)       <readonly/>
	I0520 12:54:51.763758  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763770  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='disk'>
	I0520 12:54:51.763779  610501 main.go:141] libmachine: (addons-840762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:54:51.763793  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk'/>
	I0520 12:54:51.763806  610501 main.go:141] libmachine: (addons-840762)       <target dev='hda' bus='virtio'/>
	I0520 12:54:51.763814  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763826  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763839  610501 main.go:141] libmachine: (addons-840762)       <source network='mk-addons-840762'/>
	I0520 12:54:51.763850  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763859  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763868  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763874  610501 main.go:141] libmachine: (addons-840762)       <source network='default'/>
	I0520 12:54:51.763886  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763898  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763910  610501 main.go:141] libmachine: (addons-840762)     <serial type='pty'>
	I0520 12:54:51.763921  610501 main.go:141] libmachine: (addons-840762)       <target port='0'/>
	I0520 12:54:51.763931  610501 main.go:141] libmachine: (addons-840762)     </serial>
	I0520 12:54:51.763942  610501 main.go:141] libmachine: (addons-840762)     <console type='pty'>
	I0520 12:54:51.763953  610501 main.go:141] libmachine: (addons-840762)       <target type='serial' port='0'/>
	I0520 12:54:51.763964  610501 main.go:141] libmachine: (addons-840762)     </console>
	I0520 12:54:51.763972  610501 main.go:141] libmachine: (addons-840762)     <rng model='virtio'>
	I0520 12:54:51.763982  610501 main.go:141] libmachine: (addons-840762)       <backend model='random'>/dev/random</backend>
	I0520 12:54:51.763993  610501 main.go:141] libmachine: (addons-840762)     </rng>
	I0520 12:54:51.764002  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764015  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764028  610501 main.go:141] libmachine: (addons-840762)   </devices>
	I0520 12:54:51.764043  610501 main.go:141] libmachine: (addons-840762) </domain>
	I0520 12:54:51.764055  610501 main.go:141] libmachine: (addons-840762) 
	I0520 12:54:51.768989  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:fb:9f:32 in network default
	I0520 12:54:51.769612  610501 main.go:141] libmachine: (addons-840762) Ensuring networks are active...
	I0520 12:54:51.769643  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:51.770275  610501 main.go:141] libmachine: (addons-840762) Ensuring network default is active
	I0520 12:54:51.770537  610501 main.go:141] libmachine: (addons-840762) Ensuring network mk-addons-840762 is active
	I0520 12:54:51.770983  610501 main.go:141] libmachine: (addons-840762) Getting domain xml...
	I0520 12:54:51.771663  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:52.966989  610501 main.go:141] libmachine: (addons-840762) Waiting to get IP...
	I0520 12:54:52.967844  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:52.968374  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:52.968400  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:52.968341  610539 retry.go:31] will retry after 245.330251ms: waiting for machine to come up
	I0520 12:54:53.215880  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.216390  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.216416  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.216352  610539 retry.go:31] will retry after 286.616472ms: waiting for machine to come up
	I0520 12:54:53.505129  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.505630  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.505658  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.505618  610539 retry.go:31] will retry after 312.787625ms: waiting for machine to come up
	I0520 12:54:53.820350  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.820828  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.820859  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.820772  610539 retry.go:31] will retry after 375.629067ms: waiting for machine to come up
	I0520 12:54:54.198230  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.198645  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.198678  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.198600  610539 retry.go:31] will retry after 558.50452ms: waiting for machine to come up
	I0520 12:54:54.758250  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.758836  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.758867  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.758777  610539 retry.go:31] will retry after 772.745392ms: waiting for machine to come up
	I0520 12:54:55.532754  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:55.533179  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:55.533205  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:55.533125  610539 retry.go:31] will retry after 1.015067234s: waiting for machine to come up
	I0520 12:54:56.549875  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:56.550336  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:56.550366  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:56.550270  610539 retry.go:31] will retry after 1.340438643s: waiting for machine to come up
	I0520 12:54:57.892757  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:57.893191  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:57.893226  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:57.893143  610539 retry.go:31] will retry after 1.779000898s: waiting for machine to come up
	I0520 12:54:59.674439  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:59.674849  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:59.674878  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:59.674795  610539 retry.go:31] will retry after 1.912219697s: waiting for machine to come up
	I0520 12:55:01.588719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:01.589170  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:01.589211  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:01.589118  610539 retry.go:31] will retry after 2.779568547s: waiting for machine to come up
	I0520 12:55:04.372082  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:04.372519  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:04.372543  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:04.372481  610539 retry.go:31] will retry after 2.436821512s: waiting for machine to come up
	I0520 12:55:06.810430  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:06.810907  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:06.810932  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:06.810869  610539 retry.go:31] will retry after 4.499322165s: waiting for machine to come up
	I0520 12:55:11.311574  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.311986  610501 main.go:141] libmachine: (addons-840762) Found IP for machine: 192.168.39.19
	I0520 12:55:11.312007  610501 main.go:141] libmachine: (addons-840762) Reserving static IP address...
	I0520 12:55:11.312017  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has current primary IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.312416  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find host DHCP lease matching {name: "addons-840762", mac: "52:54:00:0f:4e:d2", ip: "192.168.39.19"} in network mk-addons-840762
	I0520 12:55:11.448691  610501 main.go:141] libmachine: (addons-840762) DBG | Getting to WaitForSSH function...
	I0520 12:55:11.448724  610501 main.go:141] libmachine: (addons-840762) Reserved static IP address: 192.168.39.19
	I0520 12:55:11.448738  610501 main.go:141] libmachine: (addons-840762) Waiting for SSH to be available...
	I0520 12:55:11.451103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451496  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.451530  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451644  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH client type: external
	I0520 12:55:11.451668  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa (-rw-------)
	I0520 12:55:11.451710  610501 main.go:141] libmachine: (addons-840762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:55:11.451725  610501 main.go:141] libmachine: (addons-840762) DBG | About to run SSH command:
	I0520 12:55:11.451742  610501 main.go:141] libmachine: (addons-840762) DBG | exit 0
	I0520 12:55:11.581117  610501 main.go:141] libmachine: (addons-840762) DBG | SSH cmd err, output: <nil>: 
	I0520 12:55:11.581495  610501 main.go:141] libmachine: (addons-840762) KVM machine creation complete!
	I0520 12:55:11.581804  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:11.616351  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616704  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616919  610501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:55:11.616938  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:11.618424  610501 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:55:11.618443  610501 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:55:11.618453  610501 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:55:11.618462  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.620876  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621298  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.621331  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621539  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.621744  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.621950  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.622137  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.622327  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.622536  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.622550  610501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:55:11.732457  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:11.732485  610501 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:55:11.732494  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.736096  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736526  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.736565  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736781  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.737000  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737207  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737385  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.737562  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.737730  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.737740  610501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:55:11.846191  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:55:11.846307  610501 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:55:11.846320  610501 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:55:11.846331  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846646  610501 buildroot.go:166] provisioning hostname "addons-840762"
	I0520 12:55:11.846679  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846901  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.849576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850003  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.850032  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850162  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.850370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850550  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.850877  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.851054  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.851066  610501 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-840762 && echo "addons-840762" | sudo tee /etc/hostname
	I0520 12:55:11.976542  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-840762
	
	I0520 12:55:11.976570  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.979683  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.979984  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.980011  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.980169  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.980409  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980578  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.980890  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.981083  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.981099  610501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-840762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-840762/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-840762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:55:12.102001  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:12.102048  610501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 12:55:12.102072  610501 buildroot.go:174] setting up certificates
	I0520 12:55:12.102083  610501 provision.go:84] configureAuth start
	I0520 12:55:12.102092  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:12.102454  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.105413  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.105813  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.105841  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.106053  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.108107  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108401  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.108434  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108544  610501 provision.go:143] copyHostCerts
	I0520 12:55:12.108615  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 12:55:12.108744  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 12:55:12.108804  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 12:55:12.108851  610501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.addons-840762 san=[127.0.0.1 192.168.39.19 addons-840762 localhost minikube]
	I0520 12:55:12.292779  610501 provision.go:177] copyRemoteCerts
	I0520 12:55:12.292840  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:55:12.292869  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.295591  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.295908  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.295936  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.296100  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.296359  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.296512  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.296659  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.382793  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 12:55:12.406307  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:55:12.428152  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:55:12.450174  610501 provision.go:87] duration metric: took 348.071182ms to configureAuth
	I0520 12:55:12.450217  610501 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:55:12.450425  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:12.450508  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.453476  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.453934  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.453969  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.454114  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.454327  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454542  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454671  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.454839  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.455084  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.455101  610501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:55:12.724253  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
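Note: the %!s(MISSING) in the logged command is Go's fmt rendering of a literal %s verb, not something that ran on the guest. Judging by the file contents echoed back in the SSH output above, the effective command was presumably along these lines (a reconstruction, not copied from the minikube source):

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio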
	
	I0520 12:55:12.724287  610501 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:55:12.724297  610501 main.go:141] libmachine: (addons-840762) Calling .GetURL
	I0520 12:55:12.725626  610501 main.go:141] libmachine: (addons-840762) DBG | Using libvirt version 6000000
	I0520 12:55:12.728077  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728460  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.728490  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728650  610501 main.go:141] libmachine: Docker is up and running!
	I0520 12:55:12.728678  610501 main.go:141] libmachine: Reticulating splines...
	I0520 12:55:12.728688  610501 client.go:171] duration metric: took 21.879272392s to LocalClient.Create
	I0520 12:55:12.728716  610501 start.go:167] duration metric: took 21.879341856s to libmachine.API.Create "addons-840762"
	I0520 12:55:12.728725  610501 start.go:293] postStartSetup for "addons-840762" (driver="kvm2")
	I0520 12:55:12.728742  610501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:55:12.728761  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.729013  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:55:12.729042  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.731260  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731556  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.731576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731738  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.731952  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.732118  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.732284  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.815344  610501 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:55:12.819138  610501 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:55:12.819172  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 12:55:12.819249  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 12:55:12.819273  610501 start.go:296] duration metric: took 90.538988ms for postStartSetup
	I0520 12:55:12.819320  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:12.819902  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.822344  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822666  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.822698  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822886  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:55:12.823055  610501 start.go:128] duration metric: took 21.993143462s to createHost
	I0520 12:55:12.823077  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.825156  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825572  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.825598  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825816  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.826086  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826305  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826500  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.826715  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.826884  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.826895  610501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:55:12.937875  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716209712.902821410
	
	I0520 12:55:12.937911  610501 fix.go:216] guest clock: 1716209712.902821410
	I0520 12:55:12.937923  610501 fix.go:229] Guest: 2024-05-20 12:55:12.90282141 +0000 UTC Remote: 2024-05-20 12:55:12.823066987 +0000 UTC m=+22.107122705 (delta=79.754423ms)
	I0520 12:55:12.937959  610501 fix.go:200] guest clock delta is within tolerance: 79.754423ms
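Note: the date command a few lines up is logged with mangled format verbs; given the epoch-seconds.nanoseconds value it returned (1716209712.902821410), it was presumably:

	date +%s.%N

minikube compares that guest timestamp against the host clock to compute the 79.754423ms delta reported above.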
	I0520 12:55:12.937968  610501 start.go:83] releasing machines lock for "addons-840762", held for 22.108141971s
	I0520 12:55:12.937999  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.938309  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.941417  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941810  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.941840  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941966  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942466  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942664  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942768  610501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:55:12.942823  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.942897  610501 ssh_runner.go:195] Run: cat /version.json
	I0520 12:55:12.942918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.945235  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945541  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.945560  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945578  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945756  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.945928  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946081  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.946102  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.946236  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.946316  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.946449  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946595  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946736  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	W0520 12:55:13.060984  610501 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:55:13.061095  610501 ssh_runner.go:195] Run: systemctl --version
	I0520 12:55:13.067028  610501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:55:13.231228  610501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:55:13.237522  610501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:55:13.237591  610501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:55:13.252624  610501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
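Note: the %!p(MISSING) in the find command is the same logging artifact; the expression presumably used the GNU find print directive below to emit each matched CNI config path before renaming it with a .mk_disabled suffix, which is why the line above can report exactly which bridge config was disabled (an assumption based on the output, not taken from source):

	-printf "%p, "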
	I0520 12:55:13.252647  610501 start.go:494] detecting cgroup driver to use...
	I0520 12:55:13.252707  610501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:55:13.267587  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:55:13.282311  610501 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:55:13.282382  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:55:13.296303  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:55:13.309620  610501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:55:13.423597  610501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:55:13.589483  610501 docker.go:233] disabling docker service ...
	I0520 12:55:13.589574  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:55:13.603417  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:55:13.615738  610501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:55:13.729481  610501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:55:13.860853  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:55:13.873990  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:55:13.891599  610501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:55:13.891677  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.901887  610501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:55:13.901958  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.912206  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.922183  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.931875  610501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:55:13.941703  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.951407  610501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.967696  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
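Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Assuming the stock drop-in already carries pause_image and cgroup_manager keys (an assumption; the file itself is not printed in this log), the touched settings end up roughly as:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

That is, CRI-O is switched to the cgroupfs cgroup driver, conmon is kept in the pod cgroup, and binds to low ports are allowed for unprivileged processes inside pods.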
	I0520 12:55:13.977475  610501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:55:13.986454  610501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:55:13.986509  610501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:55:13.998511  610501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:55:14.007925  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:14.124297  610501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:55:14.265547  610501 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:55:14.265641  610501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:55:14.270847  610501 start.go:562] Will wait 60s for crictl version
	I0520 12:55:14.270917  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:55:14.274825  610501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:55:14.318641  610501 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:55:14.318754  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.346323  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.377643  610501 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:55:14.379895  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:14.382720  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383143  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:14.383180  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383427  610501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:55:14.387501  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:14.399548  610501 kubeadm.go:877] updating cluster {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:55:14.399660  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:55:14.399703  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:14.429577  610501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 12:55:14.429652  610501 ssh_runner.go:195] Run: which lz4
	I0520 12:55:14.433365  610501 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 12:55:14.437014  610501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
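Note: the stat format verbs were likewise swallowed by the logger; the existence check presumably ran something like the following (size and modification time) and failed simply because /preloaded.tar.lz4 is not on the guest yet, which is why the ~394 MB preload tarball is copied over on the next line:

	stat -c "%s %y" /preloaded.tar.lz4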
	I0520 12:55:14.437053  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 12:55:15.637746  610501 crio.go:462] duration metric: took 1.204422377s to copy over tarball
	I0520 12:55:15.637823  610501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 12:55:17.802635  610501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.164782874s)
	I0520 12:55:17.802675  610501 crio.go:469] duration metric: took 2.164898269s to extract the tarball
	I0520 12:55:17.802686  610501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 12:55:17.838706  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:17.877747  610501 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:55:17.877773  610501 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:55:17.877783  610501 kubeadm.go:928] updating node { 192.168.39.19 8443 v1.30.1 crio true true} ...
	I0520 12:55:17.877923  610501 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-840762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:55:17.878011  610501 ssh_runner.go:195] Run: crio config
	I0520 12:55:17.922732  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:17.922758  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:17.922785  610501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:55:17.922825  610501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-840762 NodeName:addons-840762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:55:17.922996  610501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-840762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
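Note: in the KubeletConfiguration block above, the evictionHard values are rendered as "0%!"(MISSING) by the same logging quirk; together with the comment about disabling disk resource management by default, they presumably read:

	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"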
	
	I0520 12:55:17.923077  610501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:55:17.932833  610501 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:55:17.932937  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 12:55:17.941978  610501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:55:17.957376  610501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:55:17.972370  610501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0520 12:55:17.987265  610501 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I0520 12:55:17.990708  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:18.001573  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:18.127654  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:18.143797  610501 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762 for IP: 192.168.39.19
	I0520 12:55:18.143820  610501 certs.go:194] generating shared ca certs ...
	I0520 12:55:18.143842  610501 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.144003  610501 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 12:55:18.358697  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt ...
	I0520 12:55:18.358733  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt: {Name:mk0337969521f8fcb91840a13b9dacd1361e0416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.358935  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key ...
	I0520 12:55:18.358950  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key: {Name:mk0b3018854c3a76c6bc712c400145554051e5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.359066  610501 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 12:55:18.637573  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt ...
	I0520 12:55:18.637611  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt: {Name:mk4030326ff4bd93acf0ae11bc67ee09461f2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637793  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key ...
	I0520 12:55:18.637804  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key: {Name:mk368b7d66fa86a67c9ef13f55a63c8fbe995e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637889  610501 certs.go:256] generating profile certs ...
	I0520 12:55:18.637948  610501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key
	I0520 12:55:18.637962  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt with IP's: []
	I0520 12:55:18.765434  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt ...
	I0520 12:55:18.765467  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: {Name:mk555ad1a22ae83e71bd1d88db4cd731d3a9df3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765635  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key ...
	I0520 12:55:18.765646  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key: {Name:mkc4037f80e62a174b1c3df78060c4c466e65958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765712  610501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da
	I0520 12:55:18.765730  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19]
	I0520 12:55:18.937615  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da ...
	I0520 12:55:18.937656  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da: {Name:mk5a01215158cf3231fad08bb78d8a3dfa212c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937851  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da ...
	I0520 12:55:18.937873  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da: {Name:mk298b016f1b857a88dbdb4cbaadf8e747393b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937973  610501 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt
	I0520 12:55:18.938079  610501 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key
	I0520 12:55:18.938151  610501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key
	I0520 12:55:18.938179  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt with IP's: []
	I0520 12:55:19.226331  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt ...
	I0520 12:55:19.226369  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt: {Name:mk192ed701b920896d7fa7fbd1cf8e177461df3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226564  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key ...
	I0520 12:55:19.226582  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key: {Name:mk3ad4b89a8ee430000e1f8b8ab63f33e943010e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226798  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 12:55:19.226843  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 12:55:19.226878  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:55:19.226916  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 12:55:19.227551  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:55:19.253380  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:55:19.275654  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:55:19.297712  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 12:55:19.319707  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 12:55:19.341205  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:55:19.365239  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:55:19.390731  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:55:19.416007  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:55:19.438628  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:55:19.454417  610501 ssh_runner.go:195] Run: openssl version
	I0520 12:55:19.459803  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:55:19.471875  610501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476597  610501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476677  610501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.483260  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:55:19.497343  610501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:55:19.501416  610501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:55:19.501498  610501 kubeadm.go:391] StartCluster: {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:55:19.501602  610501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:55:19.501684  610501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:55:19.545075  610501 cri.go:89] found id: ""
	I0520 12:55:19.545173  610501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 12:55:19.554806  610501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 12:55:19.568214  610501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 12:55:19.577374  610501 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 12:55:19.577399  610501 kubeadm.go:156] found existing configuration files:
	
	I0520 12:55:19.577443  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 12:55:19.585694  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 12:55:19.585763  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 12:55:19.594289  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 12:55:19.602494  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 12:55:19.602553  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 12:55:19.611323  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.619340  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 12:55:19.619399  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.628227  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 12:55:19.636652  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 12:55:19.636728  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 12:55:19.645298  610501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 12:55:19.702471  610501 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 12:55:19.702580  610501 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 12:55:19.825588  610501 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 12:55:19.825748  610501 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 12:55:19.825886  610501 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 12:55:20.025596  610501 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 12:55:20.083699  610501 out.go:204]   - Generating certificates and keys ...
	I0520 12:55:20.083850  610501 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:55:20.083934  610501 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:55:20.092217  610501 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:55:20.364436  610501 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:55:20.502138  610501 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:55:20.564527  610501 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:55:20.703162  610501 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:55:20.703407  610501 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:20.770361  610501 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:55:20.884233  610501 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:21.012631  610501 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:55:21.208632  610501 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:55:21.332544  610501 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:55:21.332752  610501 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:55:21.589278  610501 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:55:21.706399  610501 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:55:21.812525  610501 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:55:21.987255  610501 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:55:22.050057  610501 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:55:22.050588  610501 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:55:22.054797  610501 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:55:22.057239  610501 out.go:204]   - Booting up control plane ...
	I0520 12:55:22.057342  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:55:22.057410  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:55:22.057492  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:55:22.071354  610501 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:55:22.072252  610501 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:55:22.072345  610501 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:55:22.194444  610501 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:55:22.194562  610501 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:55:23.195085  610501 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001405192s
	I0520 12:55:23.195201  610501 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:55:28.694415  610501 kubeadm.go:309] [api-check] The API server is healthy after 5.502847931s
	I0520 12:55:28.714022  610501 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:55:28.726753  610501 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:55:28.761883  610501 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:55:28.762170  610501 kubeadm.go:309] [mark-control-plane] Marking the node addons-840762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:55:28.775335  610501 kubeadm.go:309] [bootstrap-token] Using token: ujdvgq.4r4gsjxdolox8f2t
	I0520 12:55:28.777700  610501 out.go:204]   - Configuring RBAC rules ...
	I0520 12:55:28.777840  610501 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:55:28.782202  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:55:28.794168  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:55:28.797442  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:55:28.800674  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:55:28.804165  610501 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:55:29.101623  610501 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:55:29.550656  610501 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:55:30.105708  610501 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:55:30.106638  610501 kubeadm.go:309] 
	I0520 12:55:30.106743  610501 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:55:30.106763  610501 kubeadm.go:309] 
	I0520 12:55:30.106876  610501 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:55:30.106899  610501 kubeadm.go:309] 
	I0520 12:55:30.106949  610501 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:55:30.107030  610501 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:55:30.107100  610501 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:55:30.107110  610501 kubeadm.go:309] 
	I0520 12:55:30.107159  610501 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:55:30.107165  610501 kubeadm.go:309] 
	I0520 12:55:30.107205  610501 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:55:30.107211  610501 kubeadm.go:309] 
	I0520 12:55:30.107253  610501 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:55:30.107333  610501 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:55:30.107424  610501 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:55:30.107431  610501 kubeadm.go:309] 
	I0520 12:55:30.107535  610501 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:55:30.107635  610501 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:55:30.107644  610501 kubeadm.go:309] 
	I0520 12:55:30.107756  610501 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.107892  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 12:55:30.107936  610501 kubeadm.go:309] 	--control-plane 
	I0520 12:55:30.107945  610501 kubeadm.go:309] 
	I0520 12:55:30.108063  610501 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:55:30.108079  610501 kubeadm.go:309] 
	I0520 12:55:30.108173  610501 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.108271  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 12:55:30.108549  610501 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
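The block above is the full `kubeadm init` transcript that minikube drives over SSH; the `--ignore-preflight-errors` list on the invocation skips checks (swap, CPU/memory minimums, pre-existing manifest directories) that minikube's own provisioning already accounts for. As a sketch, assuming shell access to the node, the preflight phase alone can be replayed against the same generated config to see what those checks would otherwise report:

    # Sketch only: re-run just kubeadm's preflight phase against the config
    # minikube generated for this run (paths and PATH taken from the log above).
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem   # subset of the list used above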
	I0520 12:55:30.108578  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:30.108590  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:30.111265  610501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 12:55:30.113507  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 12:55:30.123451  610501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
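Here minikube writes its bridge CNI configuration (496 bytes, generated in memory) to /etc/cni/net.d/1-k8s.conflist on the node. The exact file contents are not reproduced in the log; the inspection command below is a sketch, and the commented-out JSON is only an illustrative shape of a bridge-plus-portmap conflist, not the byte-for-byte file from this run:

    # Sketch: read the generated CNI config back from the node.
    minikube -p addons-840762 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
    # Illustrative shape of such a conflist (field values are assumptions):
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge",
    #       "isDefaultGateway": true, "ipMasq": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }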
	I0520 12:55:30.139800  610501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:55:30.139944  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-840762 minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=addons-840762 minikube.k8s.io/primary=true
	I0520 12:55:30.139947  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.244780  610501 ops.go:34] apiserver oom_adj: -16
	I0520 12:55:30.244858  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.745128  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.245492  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.745341  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.244914  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.745755  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.245160  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.245731  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.745905  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.245566  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.245227  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.745121  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.245280  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.245665  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.745064  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.245512  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.745828  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.245009  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.745277  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.245343  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.745342  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.245464  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.745186  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.919515  610501 kubeadm.go:1107] duration metric: took 12.779637158s to wait for elevateKubeSystemPrivileges
	W0520 12:55:42.919570  610501 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 12:55:42.919582  610501 kubeadm.go:393] duration metric: took 23.418090172s to StartCluster
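The burst of identical `kubectl get sa default` runs above is the wait performed by minikube's elevateKubeSystemPrivileges step: after labeling the node and creating the `minikube-rbac` cluster-admin binding, it polls roughly every 500ms until the `default` ServiceAccount exists, which the summary line reports took about 12.8s. A hedged equivalent of that poll from a workstation (context name taken from this run's logs):

    # Sketch: wait for the default ServiceAccount the same way the poll above does.
    until kubectl --context addons-840762 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done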
	I0520 12:55:42.919607  610501 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.919772  610501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:55:42.920344  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.920956  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 12:55:42.921004  610501 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:55:42.923778  610501 out.go:177] * Verifying Kubernetes components...
	I0520 12:55:42.921047  610501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 12:55:42.921275  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:42.926173  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
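From this point on, minikube configures the requested addons concurrently, which is why the `Setting addon ...`, `Launching plugin server`, and `Checking if "addons-840762" exists` lines below interleave. The `toEnable` map a few lines up records which addons this test profile turns on; as a sketch (profile and addon names taken from that map), the same toggles are available from the minikube CLI:

    # Sketch: enable and inspect the same addons by hand on this profile.
    minikube -p addons-840762 addons enable metrics-server
    minikube -p addons-840762 addons list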
	I0520 12:55:42.926185  610501 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-840762"
	I0520 12:55:42.926207  610501 addons.go:69] Setting inspektor-gadget=true in profile "addons-840762"
	I0520 12:55:42.926220  610501 addons.go:69] Setting metrics-server=true in profile "addons-840762"
	I0520 12:55:42.926235  610501 addons.go:69] Setting helm-tiller=true in profile "addons-840762"
	I0520 12:55:42.926254  610501 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-840762"
	I0520 12:55:42.926257  610501 addons.go:69] Setting cloud-spanner=true in profile "addons-840762"
	I0520 12:55:42.926263  610501 addons.go:69] Setting ingress-dns=true in profile "addons-840762"
	I0520 12:55:42.926270  610501 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-840762"
	I0520 12:55:42.926271  610501 addons.go:69] Setting storage-provisioner=true in profile "addons-840762"
	I0520 12:55:42.926277  610501 addons.go:234] Setting addon cloud-spanner=true in "addons-840762"
	I0520 12:55:42.926279  610501 addons.go:69] Setting gcp-auth=true in profile "addons-840762"
	I0520 12:55:42.926283  610501 addons.go:234] Setting addon ingress-dns=true in "addons-840762"
	I0520 12:55:42.926284  610501 addons.go:69] Setting default-storageclass=true in profile "addons-840762"
	I0520 12:55:42.926297  610501 mustload.go:65] Loading cluster: addons-840762
	I0520 12:55:42.926305  610501 addons.go:69] Setting registry=true in profile "addons-840762"
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926319  610501 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-840762"
	I0520 12:55:42.926323  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926321  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-840762"
	I0520 12:55:42.926335  610501 addons.go:234] Setting addon registry=true in "addons-840762"
	I0520 12:55:42.926338  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-840762"
	I0520 12:55:42.926364  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926510  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:42.926801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon inspektor-gadget=true in "addons-840762"
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon metrics-server=true in "addons-840762"
	I0520 12:55:42.926856  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926862  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926869  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926877  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926889  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926258  610501 addons.go:69] Setting ingress=true in profile "addons-840762"
	I0520 12:55:42.926904  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926907  610501 addons.go:69] Setting volumesnapshots=true in profile "addons-840762"
	I0520 12:55:42.926926  610501 addons.go:234] Setting addon ingress=true in "addons-840762"
	I0520 12:55:42.926932  610501 addons.go:234] Setting addon volumesnapshots=true in "addons-840762"
	I0520 12:55:42.926956  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926960  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926250  610501 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:42.927007  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927203  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927223  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927277  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927304  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927313  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927321  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927342  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926273  610501 addons.go:234] Setting addon helm-tiller=true in "addons-840762"
	I0520 12:55:42.927353  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926299  610501 addons.go:234] Setting addon storage-provisioner=true in "addons-840762"
	I0520 12:55:42.927324  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927371  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927403  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927420  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927438  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926211  610501 addons.go:69] Setting yakd=true in profile "addons-840762"
	I0520 12:55:42.927468  610501 addons.go:234] Setting addon yakd=true in "addons-840762"
	I0520 12:55:42.927472  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927519  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927850  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927890  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927962  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928030  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928378  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928410  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.928472  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928500  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.949431  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0520 12:55:42.949456  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0520 12:55:42.949517  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0520 12:55:42.949805  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0520 12:55:42.950251  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950259  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950280  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.950304  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.961815  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.961998  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962130  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962181  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0520 12:55:42.962318  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0520 12:55:42.962475  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962887  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963010  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963210  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963226  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963369  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963380  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963502  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963513  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963640  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963651  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963820  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.964552  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.964602  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.964934  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.964957  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965029  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965087  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965217  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.965230  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965317  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965630  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.965679  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.965788  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966394  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.966436  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.966662  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966702  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:42.967039  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967085  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.967295  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967336  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.968919  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 12:55:42.969170  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.969564  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.969595  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.969824  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.970420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.970440  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.970891  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.971471  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.971504  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.983702  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0520 12:55:42.989821  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.990621  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.990649  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.991055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.991712  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.991761  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.002410  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0520 12:55:43.003132  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.003287  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0520 12:55:43.003423  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0520 12:55:43.003921  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.004372  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0520 12:55:43.004660  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004675  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004807  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004818  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004868  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005179  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005279  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005691  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005760  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0520 12:55:43.006499  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.006546  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.006783  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.007377  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007400  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.007554  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007567  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008005  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.008037  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.008289  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0520 12:55:43.008399  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.008419  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008780  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.008992  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009063  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.009221  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.009752  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.009789  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.010310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0520 12:55:43.010592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.010621  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.011044  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.011105  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.011348  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.011840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.011881  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.012129  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.012289  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.012304  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.015140  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 12:55:43.012670  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.016270  610501 addons.go:234] Setting addon default-storageclass=true in "addons-840762"
	I0520 12:55:43.017402  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.017801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.017842  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.020141  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.019350  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0520 12:55:43.019379  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0520 12:55:43.019420  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.021536  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0520 12:55:43.022303  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.024787  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.024809  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 12:55:43.024831  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.023254  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023306  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0520 12:55:43.023345  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0520 12:55:43.023354  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.026350  610501 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-840762"
	I0520 12:55:43.026398  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.026788  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.026828  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.027387  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0520 12:55:43.027626  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.027638  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.028051  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.028314  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0520 12:55:43.028592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.028611  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029136  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.029215  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.029238  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.029295  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029296  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.029315  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.029505  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.029572  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.029626  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.029815  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029880  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.030169  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.030776  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.030822  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.031146  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.031163  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.031323  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.034413  610501 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 12:55:43.031845  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031879  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031970  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.032176  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.032375  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.036749  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.036763  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 12:55:43.036787  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.037457  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037481  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037723  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037740  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037816  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.038379  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038890  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0520 12:55:43.039115  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37507
	I0520 12:55:43.039514  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.039999  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.040190  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040214  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040290  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040641  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040675  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.040795  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040809  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040858  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.040862  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.043266  610501 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 12:55:43.041720  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.042600  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.042944  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.043023  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.043541  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044232  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.044298  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044797  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.045484  610501 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 12:55:43.045497  610501 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 12:55:43.045518  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.045599  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.045639  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.045667  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.048013  610501 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 12:55:43.046613  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.046718  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.046798  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0520 12:55:43.049336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.050022  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.050433  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.050492  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 12:55:43.050855  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.051378  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 12:55:43.052681  610501 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 12:55:43.052723  610501 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 12:55:43.052806  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.053613  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.053643  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.054003  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.054050  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.055062  610501 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 12:55:43.055263  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.055499  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.057312  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.057625  610501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 12:55:43.058419  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.058451  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.058549  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.059404  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0520 12:55:43.059425  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 12:55:43.059434  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 12:55:43.059639  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 12:55:43.059783  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.060180  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.060216  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0520 12:55:43.061533  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061624  610501 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 12:55:43.061636  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061845  610501 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 12:55:43.061908  610501 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 12:55:43.061914  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 12:55:43.062300  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.063479  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.063614  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.063635  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 12:55:43.063653  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063658  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 12:55:43.063674  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063734  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063764  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063794  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.064448  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064498  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064525  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064620  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.064627  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.064717  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064761  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064800  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.065417  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.069387  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069428  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.069390  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069457  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069560  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.069579  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.073328  610501 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 12:55:43.070346  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.070518  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071145  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071398  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.071491  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.072013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.072535  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073487  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073620  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074419  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.074767  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074877  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42271
	I0520 12:55:43.075149  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.076245  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 12:55:43.076377  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076274  610501 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 12:55:43.076312  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.076399  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076485  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076491  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076518  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076262  610501 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.076630  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076642  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076689  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076799  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076883  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076944  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.077301  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.078456  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 12:55:43.078484  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078503  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078554  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078573  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078590  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 12:55:43.078624  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.078637  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078804  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078805  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078813  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078827  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.079277  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.080942  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 12:55:43.080976  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083192  610501 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.083214  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 12:55:43.083235  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083265  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.083750  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083781  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083802  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083820  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083933  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.084415  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.085529  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 12:55:43.086510  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.087336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.087938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.088370  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.089371  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 12:55:43.088654  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.088714  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.089430  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.089714  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.091686  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 12:55:43.091791  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.091960  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.091975  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.093882  610501 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 12:55:43.096277  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 12:55:43.096308  610501 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 12:55:43.096334  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.093957  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.094177  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.094370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.097828  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0520 12:55:43.098616  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 12:55:43.098900  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.098969  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.099332  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.099866  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.100798  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 12:55:43.102742  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 12:55:43.102765  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 12:55:43.100830  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.102790  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.100561  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.102802  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.102789  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0520 12:55:43.101525  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.102862  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.103030  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.103224  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.103401  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.103410  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.103779  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.103815  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.105233  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.105267  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.105428  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.105719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.105859  610501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.105875  610501 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 12:55:43.105887  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.106101  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.106122  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.106160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.106373  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.106425  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.106575  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.106861  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.107019  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.108154  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.110645  610501 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 12:55:43.108938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.110686  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.109448  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.113428  610501 out.go:177]   - Using image docker.io/busybox:stable
	I0520 12:55:43.115405  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.113449  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.113676  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.115433  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 12:55:43.115464  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.115705  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.115895  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.118641  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119117  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.119150  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119343  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.119533  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.119694  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.119816  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.573616  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.619918  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.623606  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.643211  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.683331  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:43.683420  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 12:55:43.685462  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 12:55:43.685482  610501 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 12:55:43.701839  610501 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 12:55:43.701864  610501 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 12:55:43.716671  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.728860  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 12:55:43.728882  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 12:55:43.749092  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.752362  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.759380  610501 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 12:55:43.759401  610501 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 12:55:43.768880  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 12:55:43.768902  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 12:55:43.776942  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 12:55:43.776981  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 12:55:43.794490  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 12:55:43.794512  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 12:55:43.876312  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 12:55:43.876350  610501 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 12:55:43.928322  610501 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 12:55:43.928352  610501 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 12:55:43.980917  610501 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:43.980943  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 12:55:43.985401  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 12:55:43.985423  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 12:55:44.010497  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 12:55:44.010530  610501 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 12:55:44.025070  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.025103  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 12:55:44.025300  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 12:55:44.025326  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 12:55:44.097831  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 12:55:44.097860  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 12:55:44.099542  610501 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 12:55:44.099567  610501 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 12:55:44.109990  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 12:55:44.110015  610501 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 12:55:44.125277  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:44.152567  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.152593  610501 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 12:55:44.183917  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.199196  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 12:55:44.199234  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 12:55:44.278037  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 12:55:44.278067  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 12:55:44.293166  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.293217  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 12:55:44.297324  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 12:55:44.297351  610501 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 12:55:44.315561  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.346264  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 12:55:44.346298  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 12:55:44.453370  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 12:55:44.453396  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 12:55:44.510982  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.586650  610501 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.586684  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 12:55:44.611553  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 12:55:44.611584  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 12:55:44.726323  610501 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 12:55:44.726349  610501 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 12:55:44.881456  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 12:55:44.881482  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 12:55:44.890866  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.927590  610501 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:44.927619  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 12:55:45.137317  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 12:55:45.137345  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 12:55:45.209075  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:45.441214  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 12:55:45.441241  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 12:55:45.828932  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 12:55:45.828994  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 12:55:46.257170  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:46.257208  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 12:55:46.498819  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:47.266993  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.693329132s)
	I0520 12:55:47.267056  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267070  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267417  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:47.267482  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267504  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:47.267520  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267530  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267892  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267912  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:50.073084  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 12:55:50.073138  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.076118  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076632  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.076665  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076958  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.077217  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.077455  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.077652  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:50.468021  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 12:55:50.521617  610501 addons.go:234] Setting addon gcp-auth=true in "addons-840762"
	I0520 12:55:50.521694  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:50.522184  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.522239  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.553174  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0520 12:55:50.553754  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.554480  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.554514  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.554880  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.555571  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.555609  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.572015  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0520 12:55:50.572479  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.573041  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.573078  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.573484  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.573698  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:50.575484  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:50.575739  610501 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 12:55:50.575769  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.579095  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579655  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.579690  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579792  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.580013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.580346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.580587  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:51.388578  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.768609397s)
	I0520 12:55:51.388647  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388650  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.765002737s)
	I0520 12:55:51.388698  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388707  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.745461801s)
	I0520 12:55:51.388717  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388734  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388746  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388661  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388887  610501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.705429993s)
	I0520 12:55:51.388915  610501 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
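
The sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 on this run), and adds log before errors. Reconstructed from those sed expressions only (not captured from the live cluster), the relevant Corefile fragment after the replace should look roughly like:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
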
	I0520 12:55:51.388936  610501 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.705565209s)
	I0520 12:55:51.389084  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389097  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389107  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389116  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389209  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389232  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389259  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389270  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389296  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389326  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389343  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389349  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389360  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389372  610501 addons.go:470] Verifying addon ingress=true in "addons-840762"
	I0520 12:55:51.389379  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.672674787s)
	I0520 12:55:51.389405  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389425  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.392370  610501 out.go:177] * Verifying ingress addon...
	I0520 12:55:51.389528  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.640405425s)
	I0520 12:55:51.389584  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.637201371s)
	I0520 12:55:51.389624  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.264322968s)
	I0520 12:55:51.389661  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.20570414s)
	I0520 12:55:51.389732  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.074144721s)
	I0520 12:55:51.389772  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878747953s)
	I0520 12:55:51.389865  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498962158s)
	I0520 12:55:51.389933  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.180826737s)
	I0520 12:55:51.389965  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389973  610501 node_ready.go:35] waiting up to 6m0s for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.389991  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.390011  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389352  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.390014  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.394170  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394193  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394192  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394207  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394227  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394229  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394240  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394253  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394268  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394281  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394210  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394296  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394300  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394296  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394313  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394328  610501 main.go:141] libmachine: Making call to close driver server
	W0520 12:55:51.394209  610501 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394380  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394387  610501 retry.go:31] will retry after 303.389823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
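
The pair of warnings above is a CRD-establishment race rather than a broken addon: the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass go out in a single kubectl apply batch, so the class is rejected with "no matches for kind ... ensure CRDs are installed first" until the just-created CRDs are served by the API, and the addon manager simply retries the whole apply after a short backoff (the retry at 12:55:51.698831 below re-runs it with apply --force and succeeds). A minimal sketch of that retry-with-backoff pattern, assuming kubectl is on PATH; the function name and manifest list are made up for illustration and this is not minikube's own retry code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f ...` until it succeeds,
// sleeping between attempts with a doubling backoff. This rides out transient
// errors such as a CRD that was just created but is not yet established.
func applyWithRetry(manifests []string, attempts int, backoff time.Duration) error {
	args := []string{"apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func main() {
	// Hypothetical manifest paths mirroring the volumesnapshots batch above.
	manifests := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	if err := applyWithRetry(manifests, 5, 300*time.Millisecond); err != nil {
		fmt.Println("giving up:", err)
	}
}

An alternative that avoids the retry entirely would be to wait for the CRDs to report the Established condition (for example with kubectl wait) before applying the VolumeSnapshotClass, at the cost of an extra step.
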
	I0520 12:55:51.395046  610501 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 12:55:51.395166  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395197  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395199  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395214  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395218  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395233  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395245  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395262  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395272  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395276  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395280  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395288  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395291  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395307  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395313  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395321  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395263  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395338  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395345  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395354  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395361  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395367  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395429  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395448  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395459  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395466  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395481  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395204  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395327  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395347  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395846  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.396442  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396480  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396488  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.398870  610501 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-840762 service yakd-dashboard -n yakd-dashboard
	
	I0520 12:55:51.396611  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396643  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396663  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396677  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396695  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396696  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396721  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396732  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396855  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400153  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400898  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400913  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400902  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400962  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400970  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400973  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400990  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.401004  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401036  610501 addons.go:470] Verifying addon metrics-server=true in "addons-840762"
	I0520 12:55:51.400203  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.401684  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.401704  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401745  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402068  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.402086  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.402091  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402097  610501 addons.go:470] Verifying addon registry=true in "addons-840762"
	I0520 12:55:51.405187  610501 out.go:177] * Verifying registry addon...
	I0520 12:55:51.408123  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 12:55:51.437541  610501 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 12:55:51.437563  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.449131  610501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 12:55:51.449151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:51.457909  610501 node_ready.go:49] node "addons-840762" has status "Ready":"True"
	I0520 12:55:51.457932  610501 node_ready.go:38] duration metric: took 63.66746ms for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.457941  610501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:55:51.478924  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.478955  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479239  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.479251  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479266  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479268  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.479509  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479526  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 12:55:51.479651  610501 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
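
The 'default-storageclass' warning above is an optimistic-concurrency failure: something else updated the local-path StorageClass between the read and the write, so the API server rejected the update with "the object has been modified; please apply your changes to the latest version and try again". The usual client-go remedy is to re-read the object and retry the update on conflict. A minimal sketch of that pattern with k8s.io/client-go; the kubeconfig path and function name are illustrative, and this is not minikube's addon code:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass.
// retry.RetryOnConflict re-runs the read-modify-write whenever the update
// comes back with a 409 Conflict ("the object has been modified").
func markNonDefault(ctx context.Context, c kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	// Illustrative kubeconfig path; inside the node these logs use
	// /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markNonDefault(context.Background(), cs, "local-path"); err != nil {
		panic(err)
	}
}
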
	I0520 12:55:51.494377  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.508970  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.508991  610501 pod_ready.go:81] duration metric: took 14.583357ms for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.509001  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544741  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.544772  610501 pod_ready.go:81] duration metric: took 35.763404ms for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544784  610501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576819  610501 pod_ready.go:92] pod "etcd-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.576843  610501 pod_ready.go:81] duration metric: took 32.050234ms for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576852  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592484  610501 pod_ready.go:92] pod "kube-apiserver-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.592520  610501 pod_ready.go:81] duration metric: took 15.660119ms for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592536  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.698831  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:51.797633  610501 pod_ready.go:92] pod "kube-controller-manager-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.797657  610501 pod_ready.go:81] duration metric: took 205.113267ms for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.797669  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.892953  610501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-840762" context rescaled to 1 replicas
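The rescale logged above can be reproduced directly with kubectl if the deployment ever needs to be adjusted by hand (a sketch against the same context):

    kubectl --context addons-840762 -n kube-system scale deployment coredns --replicas=1
    kubectl --context addons-840762 -n kube-system get deployment coredns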
	I0520 12:55:51.899463  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.912554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.200864  610501 pod_ready.go:92] pod "kube-proxy-mpkr9" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.200894  610501 pod_ready.go:81] duration metric: took 403.210884ms for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.200908  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.404611  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:52.417071  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.607922  610501 pod_ready.go:92] pod "kube-scheduler-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.607946  610501 pod_ready.go:81] duration metric: took 407.031521ms for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.607957  610501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
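The pod_ready polling that repeats below checks the Ready condition on the metrics-server pod; an equivalent check from the command line (pod name taken from the log above) is roughly:

    kubectl --context addons-840762 -n kube-system get pod metrics-server-c59844bb4-8g977 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A result of False matches the "Ready":"False" entries that follow until the add-on's container passes its readiness probe.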
	I0520 12:55:52.938316  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.939704  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.105590  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.606697582s)
	I0520 12:55:53.105615  610501 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529845767s)
	I0520 12:55:53.105664  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.105679  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.108268  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:53.105995  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.106025  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.110677  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.110703  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.112892  610501 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 12:55:53.110719  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.115284  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 12:55:53.115305  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 12:55:53.115627  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.115673  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.115691  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.115708  610501 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:53.118485  610501 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 12:55:53.122364  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 12:55:53.138587  610501 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 12:55:53.138615  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
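An equivalent view of the three csi-hostpath-driver pods being waited on here (a sketch, assuming the cluster is still up):

    kubectl --context addons-840762 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver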
	I0520 12:55:53.192835  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 12:55:53.192870  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 12:55:53.284131  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.284160  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 12:55:53.399393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.413779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:53.418308  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.628280  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:53.677186  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.97829441s)
	I0520 12:55:53.677265  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677280  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677596  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677626  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.677630  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.677637  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677662  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677944  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677959  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.903390  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.913905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.129023  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.400578  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.414433  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.634153  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.639118  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:54.957073  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.957497  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.969504  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.551140451s)
	I0520 12:55:54.969566  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.969580  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.969979  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.969997  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.969998  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970008  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.970019  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.970333  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970359  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.970372  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.971645  610501 addons.go:470] Verifying addon gcp-auth=true in "addons-840762"
	I0520 12:55:54.974788  610501 out.go:177] * Verifying gcp-auth addon...
	I0520 12:55:54.977686  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 12:55:54.992478  610501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 12:55:54.992501  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
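The gcp-auth verification below polls the same label selector in the gcp-auth namespace; done by hand that is roughly:

    kubectl --context addons-840762 -n gcp-auth get pods \
      -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-840762 -n gcp-auth wait --for=condition=ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m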
	I0520 12:55:55.127400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.399268  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.413367  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:55.627381  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.916014  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.918171  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.981718  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.127730  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.399560  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.413077  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.482224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.627478  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.900468  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.912466  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.981665  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.115037  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:57.130520  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.400035  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.413623  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.481613  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.629820  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.915039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.981464  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.127457  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.400777  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.414573  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.481462  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.628832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.899601  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.914331  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.982255  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.115366  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:59.133101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.401812  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.419535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.481225  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.631104  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.902353  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.912317  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.981330  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.128485  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.401561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.430286  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.482144  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.628293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.899691  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.915101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.982008  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.129239  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.399224  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.414726  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.481942  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.616921  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:01.628729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.900780  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.913368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.981214  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.127371  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.401377  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.414207  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.482101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.627879  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.900216  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.914014  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.982218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.130013  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.400273  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.413347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:03.481203  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.629010  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.899658  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.913498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.022081  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.115681  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:04.128931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.399719  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.413265  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.630465  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.915827  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.982611  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.127045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.399804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.413527  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.482587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.628542  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.900077  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.913575  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.981299  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.131335  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.399067  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.413005  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.482481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.617357  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:06.629066  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.899839  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.913012  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.982047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.132364  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.399705  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.417400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.481431  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.628233  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.900194  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.912856  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.981096  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.130863  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.399114  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.421325  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.488216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.626810  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.899746  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.913412  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.981447  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.114772  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:09.127612  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.399816  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.414275  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.481644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.628774  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.900228  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.915686  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.983410  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.128911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.399503  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.413047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.482114  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.627627  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.912741  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.981653  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.127586  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.399736  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.415842  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.482111  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.616098  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:11.631401  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.899584  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.914011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.982488  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.133642  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.404826  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.415781  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.482240  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.627875  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.900429  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.913578  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.982373  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.128350  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.400020  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.412649  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.481828  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.627553  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.899893  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.912654  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.981503  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.115122  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:14.129175  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.400146  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.413152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.481089  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.628054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.900376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.920739  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.982583  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.127618  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.400262  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.415277  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.482039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.627946  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.900718  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.912777  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.982140  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.129993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.399519  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.412993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.482054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.614742  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:16.628387  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.902864  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.916738  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.982514  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.127713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.398762  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.416228  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.481442  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.628109  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.901062  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.915833  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.983591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.128602  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.400312  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.413380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.481469  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.627648  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.900162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.913170  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.981679  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.114147  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:19.127641  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.399059  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.416675  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.481893  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.628587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.901500  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.914861  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.982268  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.127892  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.400086  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.412871  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.481643  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.631895  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.899376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.913218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.983029  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.115273  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:21.128235  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.398928  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.412581  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.481844  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.628150  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.899645  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.913721  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.981633  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.400392  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.413600  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.482801  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.628019  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.900239  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.913015  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.981463  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.139117  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:23.140261  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.399288  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.415368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.481661  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.629617  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.902440  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.915257  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.981352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.129929  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.399488  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.413165  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.482158  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.627817  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.899083  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.915671  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.981425  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.399318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.413105  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.482011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.613886  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:25.627368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.902246  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.912609  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.981536  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.129732  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.529301  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.529596  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.529663  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.633421  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.901177  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.915422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.981413  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.127789  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.398754  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.413042  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.482631  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.614073  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:27.629448  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.900640  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.913221  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.981368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.132334  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.399797  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.413632  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.628716  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.900159  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.914554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.981591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.127504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.399722  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.414894  610501 kapi.go:107] duration metric: took 38.006762133s to wait for kubernetes.io/minikube-addons=registry ...
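With the registry label selector satisfied after roughly 38s, the remaining waits in this log are ingress-nginx, gcp-auth and csi-hostpath-driver; the registry pods can be confirmed with a sketch like:

    kubectl --context addons-840762 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=registry -o wide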
	I0520 12:56:29.481634  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.614187  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:29.627857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.899322  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.981345  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.128550  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.400316  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.481555  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.627746  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.900189  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.982356  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.129538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.400422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.481492  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.629916  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.899144  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.981857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.114220  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:32.127498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.399699  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.482072  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.651101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.899211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.981322  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.127482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.401190  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.501374  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.628422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.900401  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.981380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.127915  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.400211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.484293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.614543  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:34.627483  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.902843  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.981683  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.127848  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.398956  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.481444  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.626983  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.900313  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.980852  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.128263  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:36.401318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:36.482199  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.616548  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:36.628510  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.039771  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.040297  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.128332  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.399002  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.481655  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.627644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.900542  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.981657  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.127698  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.399200  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.481409  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.628445  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.899393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.981201  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.370826  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.372189  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:39.399948  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.481855  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.627676  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.898860  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.981735  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.128056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.399370  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.481858  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.628636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.900139  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.982329  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.130978  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.399499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.481032  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.614210  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:41.627128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.899422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.981776  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.127905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.398936  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.481585  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.629134  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.899492  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.982922  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.127672  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.400155  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.481991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.615112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:43.629339  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.899804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.983481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.127535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.399564  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.481474  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.633982  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.899485  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.981347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.127532  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.413987  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.481650  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.615259  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:45.629151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.899534  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.981133  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.127626  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.401424  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.481108  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.626748  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.899481  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.983910  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.127352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.400499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.481216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.629148  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.899944  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.981178  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.114820  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:48.126832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.400385  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.481113  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.627340  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.900317  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.982939  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.440975  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.448941  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.483270  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.627430  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.899374  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.983132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.127931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.404223  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.482231  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.613962  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:50.627506  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.901701  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.981212  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.253571  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.400214  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.485666  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.628816  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.899909  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.981764  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.132414  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.400653  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.482230  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.627845  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.981128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.114152  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:53.127321  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.399495  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.480504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.627259  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.899327  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.982045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.126980  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.400103  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.482185  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.630283  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.899841  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.982038  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.127806  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.400082  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.482058  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.614659  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:55.628985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.899964  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.981440  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.145450  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.400153  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.481988  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.627636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.903212  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.985482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.127953  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.405938  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.480991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.615293  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:57.627790  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.899165  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.981629  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.295639  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.401472  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.480992  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.628426  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.899375  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.982298  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.128070  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.399507  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.484338  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.630551  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:59.636538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.900561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.982224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.129894  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.399561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.482729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.627508  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.903740  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.981954  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.133438  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:01.399150  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:01.481779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.630056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.352725  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.353084  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.353297  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.357311  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:02.399678  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.481822  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.627596  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.899845  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.981411  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.127911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.398988  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.481636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.632574  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.899755  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.981290  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.128310  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.414840  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:04.481441  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.613658  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:04.629956  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.901144  610501 kapi.go:107] duration metric: took 1m13.506095567s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 12:57:04.981604  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.128191  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.481173  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.628513  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.982076  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.127702  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.481434  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.614389  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:06.627307  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.981074  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.127319  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.481753  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.627396  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.981256  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.127837  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:08.483769  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.627352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.127470  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.132668  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.143694  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:09.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.627347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.982170  610501 kapi.go:107] duration metric: took 1m15.004478307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 12:57:09.984996  610501 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-840762 cluster.
	I0520 12:57:09.987400  610501 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 12:57:09.989848  610501 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 12:57:10.128713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:10.626906  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.126993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.615193  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:11.627544  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.127562  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.627291  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.127538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.615932  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:13.627132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:14.129554  610501 kapi.go:107] duration metric: took 1m21.00719057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 12:57:14.132384  610501 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, yakd, helm-tiller, ingress-dns, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0520 12:57:14.134475  610501 addons.go:505] duration metric: took 1m31.2134234s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner yakd helm-tiller ingress-dns metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0520 12:57:16.114935  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:18.615065  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:21.115704  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:23.614492  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:25.615476  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:28.115096  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:30.613576  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:32.615824  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:35.114244  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:37.114736  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:39.115280  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:41.616112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:44.115963  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:46.613676  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:48.615457  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:49.115531  610501 pod_ready.go:92] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.115556  610501 pod_ready.go:81] duration metric: took 1m56.507573924s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.115567  610501 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120872  610501 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.120891  610501 pod_ready.go:81] duration metric: took 5.316291ms for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120917  610501 pod_ready.go:38] duration metric: took 1m57.662965814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:57:49.120943  610501 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:57:49.121015  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:49.121087  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:49.196694  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:49.196728  610501 cri.go:89] found id: ""
	I0520 12:57:49.196740  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:49.196806  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.201213  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:49.201309  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:49.261920  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.261956  610501 cri.go:89] found id: ""
	I0520 12:57:49.261967  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:49.262042  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.265960  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:49.266026  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:49.311594  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.311616  610501 cri.go:89] found id: ""
	I0520 12:57:49.311624  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:49.311677  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.315953  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:49.316040  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:49.364885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:49.364924  610501 cri.go:89] found id: ""
	I0520 12:57:49.364932  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:49.364988  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.369010  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:49.369072  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:49.424747  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:49.424768  610501 cri.go:89] found id: ""
	I0520 12:57:49.424776  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:49.424834  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.428991  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:49.429080  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:49.499475  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.499510  610501 cri.go:89] found id: ""
	I0520 12:57:49.499523  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:49.499594  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.504418  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:49.504502  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:49.561072  610501 cri.go:89] found id: ""
	I0520 12:57:49.561100  610501 logs.go:276] 0 containers: []
	W0520 12:57:49.561113  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:49.561123  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:49.561138  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:49.654245  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:49.654289  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.728091  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:49.728129  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.807124  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:49.807159  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.880558  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:49.880602  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:49.936020  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:49.936062  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:49.950180  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:49.950226  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:50.132293  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:50.132328  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:50.176058  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:50.176093  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:50.218071  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:50.218105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:50.255262  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:50.255300  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:53.392370  610501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:57:53.425325  610501 api_server.go:72] duration metric: took 2m10.504279951s to wait for apiserver process to appear ...
	I0520 12:57:53.425356  610501 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:57:53.425406  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:53.425466  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:53.460785  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.460818  610501 cri.go:89] found id: ""
	I0520 12:57:53.460830  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:53.460890  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.464985  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:53.465054  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:53.500156  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:53.500182  610501 cri.go:89] found id: ""
	I0520 12:57:53.500192  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:53.500268  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.504273  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:53.504349  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:53.542028  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:53.542056  610501 cri.go:89] found id: ""
	I0520 12:57:53.542068  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:53.542122  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.546279  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:53.546355  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:53.583434  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:53.583471  610501 cri.go:89] found id: ""
	I0520 12:57:53.583481  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:53.583549  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.587699  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:53.587757  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:53.629320  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.629350  610501 cri.go:89] found id: ""
	I0520 12:57:53.629359  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:53.629420  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.633673  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:53.633735  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:53.670154  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:53.670182  610501 cri.go:89] found id: ""
	I0520 12:57:53.670192  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:53.670259  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.674100  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:53.674173  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:53.711324  610501 cri.go:89] found id: ""
	I0520 12:57:53.711357  610501 logs.go:276] 0 containers: []
	W0520 12:57:53.711365  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:53.711380  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:53.711400  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:53.730840  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:53.730875  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:53.852051  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:53.852082  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.901591  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:53.901628  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.941072  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:53.941105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:54.644393  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:54.644441  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:54.695277  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:54.695317  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:54.775974  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:54.776021  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:54.831859  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:54.831908  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:54.876969  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:54.877020  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:54.931426  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:54.931472  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.491119  610501 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0520 12:57:57.495836  610501 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0520 12:57:57.497181  610501 api_server.go:141] control plane version: v1.30.1
	I0520 12:57:57.497205  610501 api_server.go:131] duration metric: took 4.071843024s to wait for apiserver health ...
	I0520 12:57:57.497214  610501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:57:57.497235  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:57.497313  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:57.534814  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:57.534847  610501 cri.go:89] found id: ""
	I0520 12:57:57.534857  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:57.534924  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.538897  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:57.538957  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:57.578468  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:57.578502  610501 cri.go:89] found id: ""
	I0520 12:57:57.578511  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:57.578571  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.582910  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:57.582980  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:57.622272  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:57.622294  610501 cri.go:89] found id: ""
	I0520 12:57:57.622303  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:57.622353  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.626295  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:57.626351  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:57.671885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.671910  610501 cri.go:89] found id: ""
	I0520 12:57:57.671918  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:57.671970  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.676755  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:57.676827  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:57.713995  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:57.714014  610501 cri.go:89] found id: ""
	I0520 12:57:57.714023  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:57.714084  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.718184  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:57.718247  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:57.755752  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.755782  610501 cri.go:89] found id: ""
	I0520 12:57:57.755793  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:57.755845  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.759887  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:57.759953  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:57.796173  610501 cri.go:89] found id: ""
	I0520 12:57:57.796207  610501 logs.go:276] 0 containers: []
	W0520 12:57:57.796218  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:57.796230  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:57.796243  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.843540  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:57.843582  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:58.695225  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:58.695278  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:58.734177  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:58.734221  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:58.798029  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:58.798075  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:58.879582  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:58.879638  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:58.894417  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:58.894467  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:59.011252  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:59.011297  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:59.058509  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:59.058547  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:59.120006  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:59.120045  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:59.157503  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:59.157537  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:58:01.712083  610501 system_pods.go:59] 18 kube-system pods found
	I0520 12:58:01.712116  610501 system_pods.go:61] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.712121  610501 system_pods.go:61] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.712124  610501 system_pods.go:61] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.712127  610501 system_pods.go:61] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.712130  610501 system_pods.go:61] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.712133  610501 system_pods.go:61] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.712136  610501 system_pods.go:61] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.712138  610501 system_pods.go:61] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.712141  610501 system_pods.go:61] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.712144  610501 system_pods.go:61] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.712146  610501 system_pods.go:61] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.712149  610501 system_pods.go:61] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.712152  610501 system_pods.go:61] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.712154  610501 system_pods.go:61] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.712157  610501 system_pods.go:61] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.712160  610501 system_pods.go:61] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.712164  610501 system_pods.go:61] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.712169  610501 system_pods.go:61] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.712174  610501 system_pods.go:74] duration metric: took 4.214955142s to wait for pod list to return data ...
	I0520 12:58:01.712182  610501 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:58:01.714213  610501 default_sa.go:45] found service account: "default"
	I0520 12:58:01.714230  610501 default_sa.go:55] duration metric: took 2.042647ms for default service account to be created ...
	I0520 12:58:01.714236  610501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:58:01.722252  610501 system_pods.go:86] 18 kube-system pods found
	I0520 12:58:01.722281  610501 system_pods.go:89] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.722287  610501 system_pods.go:89] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.722291  610501 system_pods.go:89] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.722296  610501 system_pods.go:89] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.722312  610501 system_pods.go:89] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.722317  610501 system_pods.go:89] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.722321  610501 system_pods.go:89] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.722325  610501 system_pods.go:89] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.722329  610501 system_pods.go:89] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.722333  610501 system_pods.go:89] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.722340  610501 system_pods.go:89] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.722344  610501 system_pods.go:89] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.722350  610501 system_pods.go:89] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.722354  610501 system_pods.go:89] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.722360  610501 system_pods.go:89] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.722364  610501 system_pods.go:89] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.722370  610501 system_pods.go:89] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.722376  610501 system_pods.go:89] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.722382  610501 system_pods.go:126] duration metric: took 8.141251ms to wait for k8s-apps to be running ...
	I0520 12:58:01.722391  610501 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:58:01.722435  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:58:01.736978  610501 system_svc.go:56] duration metric: took 14.575937ms WaitForService to wait for kubelet
	I0520 12:58:01.737014  610501 kubeadm.go:576] duration metric: took 2m18.815967987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:58:01.737035  610501 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:58:01.740116  610501 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:58:01.740145  610501 node_conditions.go:123] node cpu capacity is 2
	I0520 12:58:01.740159  610501 node_conditions.go:105] duration metric: took 3.120029ms to run NodePressure ...
	I0520 12:58:01.740172  610501 start.go:240] waiting for startup goroutines ...
	I0520 12:58:01.740179  610501 start.go:245] waiting for cluster config update ...
	I0520 12:58:01.740195  610501 start.go:254] writing updated cluster config ...
	I0520 12:58:01.740485  610501 ssh_runner.go:195] Run: rm -f paused
	I0520 12:58:01.793273  610501 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:58:01.796159  610501 out.go:177] * Done! kubectl is now configured to use "addons-840762" cluster and "default" namespace by default
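The readiness checks above (kube-system pods, the default service account, the kubelet unit, and node pressure conditions) can be re-run by hand if a later test needs to confirm the cluster is still healthy. A minimal sketch using the same profile and the commands that appear in this log; the plain kubectl queries are standard usage rather than anything the test itself ran:

	# Pods and default service account (mirrors the system_pods.go / default_sa.go checks)
	kubectl --context addons-840762 get pods -n kube-system
	kubectl --context addons-840762 get serviceaccount default

	# Kubelet service check, same command the test ran over SSH
	out/minikube-linux-amd64 -p addons-840762 ssh "sudo systemctl is-active --quiet service kubelet && echo kubelet: active"

	# Node capacity and pressure conditions (NodePressure verification)
	kubectl --context addons-840762 describe node addons-840762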
	
	
	==> CRI-O <==
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.731790332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210079731765456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a423eeb-fb9a-4084-b647-57d6b731fee9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.732395233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b783627-afd5-4020-8255-30116af0a80c name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.732449954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b783627-afd5-4020-8255-30116af0a80c name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.732785651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c59cdf2dc6a09d9d755a7e29d09869853667e39761314195a525ca06f1dc35,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209996981078132,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map
[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c
5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:
a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Meta
data:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b783627-afd5-4020-8255-30116af0a80c name=/runtime.v1.RuntimeService/ListContainers
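The CRI-O debug entries above are the server side of the CRI gRPC API (Version, ImageFsInfo and ListContainers RPCs) that the log collector drives through crictl. The same responses can be requested directly on the node; a minimal sketch, assuming the standard crictl subcommands for these RPCs:

	sudo crictl version      # RuntimeService/Version
	sudo crictl imagefsinfo  # ImageService/ImageFsInfo
	sudo crictl ps -a        # RuntimeService/ListContainers (no filters applied)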
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.764955247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc213c60-5e34-458b-9a58-f544f3114f8b name=/runtime.v1.RuntimeService/Version
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.765027944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc213c60-5e34-458b-9a58-f544f3114f8b name=/runtime.v1.RuntimeService/Version
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.766286892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f213eb6f-b253-4a11-b54a-0273b686df32 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.767507601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210079767479971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f213eb6f-b253-4a11-b54a-0273b686df32 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.767950773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1f57237-3653-401a-a932-3d7d8f5af2e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.768007644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1f57237-3653-401a-a932-3d7d8f5af2e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.768430614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c59cdf2dc6a09d9d755a7e29d09869853667e39761314195a525ca06f1dc35,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209996981078132,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map
[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c
5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:
a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Meta
data:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1f57237-3653-401a-a932-3d7d8f5af2e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.797744992Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=55eab329-bbc9-402f-bf33-b5eed4fa6c2f name=/runtime.v1.RuntimeService/Status
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.797835423Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=55eab329-bbc9-402f-bf33-b5eed4fa6c2f name=/runtime.v1.RuntimeService/Status
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.803969269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efcbcb1b-b7b2-4eb9-844d-4d38498e0e11 name=/runtime.v1.RuntimeService/Version
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.804055005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efcbcb1b-b7b2-4eb9-844d-4d38498e0e11 name=/runtime.v1.RuntimeService/Version
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.804889860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fa33579-a753-4814-840a-be877da4b1c0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.806380669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210079806354399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fa33579-a753-4814-840a-be877da4b1c0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.806902803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03243cf3-0c10-4b58-a7c2-bccd5b61eb10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.806970858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03243cf3-0c10-4b58-a7c2-bccd5b61eb10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.807384005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c59cdf2dc6a09d9d755a7e29d09869853667e39761314195a525ca06f1dc35,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209996981078132,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map
[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c
5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:
a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Meta
data:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03243cf3-0c10-4b58-a7c2-bccd5b61eb10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.833907739Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ec84d2f3-502c-4f90-9388-c385104ea8e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.834498006Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-cfg4n,Uid:9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716210069342361800,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:01:09.026057079Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&PodSandboxMetadata{Name:nginx,Uid:efadd51f-18e9-48cb-bc58-103881fd9263,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1716209926835092763,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:58:46.521202680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&PodSandboxMetadata{Name:headlamp-68456f997b-5k6z6,Uid:c7973bac-822b-4c44-a10c-65bcfdb5f17d,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209913440697647,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,pod-template-hash: 68456f997b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
05-20T12:58:33.101750697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-cjjrn,Uid:6847135c-da26-4866-92c6-81b6e53be1a8,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209819158223018,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:54.889904685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:820ef6d8fc1bcb391b65a6d9c531222e22002de91963d50d1741dcb8e0567d60,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-768f948f8f-5dgl9,Uid:156342ec-3be6-4be5-9629-f89ca1ee418b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NO
TREADY,CreatedAt:1716209815122772853,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-768f948f8f-5dgl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 156342ec-3be6-4be5-9629-f89ca1ee418b,pod-template-hash: 768f948f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:51.207562550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-xpvg2,Uid:7a441bce-20c7-4f19-b940-4cb826784cea,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716209753292902222,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.k
ubernetes.io/controller-uid: 71ad64c8-4bb5-46ec-9962-20efaf741a89,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 71ad64c8-4bb5-46ec-9962-20efaf741a89,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:51.317679870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-lglgg,Uid:901789e2-d702-40f6-a420-a1d24db58a4e,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716209753195420550,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-u
id: 07ca8b10-5e6f-4b4a-99c2-1ed562e0fa3d,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 07ca8b10-5e6f-4b4a-99c2-1ed562e0fa3d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:51.285194709Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&PodSandboxMetadata{Name:gadget-4r2zg,Uid:20112e09-b29e-4ddb-96ef-4d06088304a4,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209749912437824,Labels:map[string]string{controller-revision-hash: 674bb46ff7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,k8s-app: ga
dget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,inspektor-gadget.kinvolk.io/option-hook-mode: auto,kubernetes.io/config.seen: 2024-05-20T12:55:49.234895725Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-hgp7b,Uid:98ccbc95-97f1-48f6-99a4-6c335bd4b99d,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209749274559568,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:48.960760536Z,kubernetes
.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-8g977,Uid:2f766954-b3a4-4592-865f-b37297fefae7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209748942830283,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:48.280698083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0af02429-e13b-4886-993d-0d7815e2fb69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209747577301255,Labels:map[string]string{addonmanager.kub
ernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-2
0T12:55:47.265196849Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:238da6830db9a2368fabac6ea0dfd481c8ded1607f30cfa736627a52807a32a9,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:c057ec77-ddf8-4ad7-9001-a7b4f48a2d00,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716209747294391541,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c057ec77-ddf8-4ad7-9001-a7b4f48a2d00,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"
POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-05-20T12:55:46.978028767Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&PodSandboxMetadata{Name:cloud-spanner-emulator-6fcd4f6f98-tzksc,Uid:14c3ddef-1fef-49b7-84cc-6d33520ba034,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209746755523826,Labels:map[string]string{app: cloud-spanner-emulator,io.kubernetes.container.name: POD,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-
84cc-6d33520ba034,pod-template-hash: 6fcd4f6f98,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:46.143644021Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vp4g8,Uid:b9838e64-b32b-489f-8944-3a29c87892a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209743252975310,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:42.916454419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-mpkr9,Uid:d7a0dc50-43c6-4927-9c13-45e9104e22
06,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209742741830364,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:55:41.828658769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-840762,Uid:1a4bb06d2b47c119024d856c02f66b4d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209723858822169,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,tier: cont
rol-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a4bb06d2b47c119024d856c02f66b4d,kubernetes.io/config.seen: 2024-05-20T12:55:22.793502368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-840762,Uid:25de9c545ad63a4181a22d9d16ed13c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209723857988644,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.19:8443,kubernetes.io/config.hash: 25de9c545ad63a4181a22d9d16ed13c1,kubernetes.io/config.seen: 2024-05-20T12:55:22.793500125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Po
dSandbox{Id:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&PodSandboxMetadata{Name:etcd-addons-840762,Uid:dc20fa6c7f57dfba2ef2611768216c5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716209723857621157,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: dc20fa6c7f57dfba2ef2611768216c5c,kubernetes.io/config.seen: 2024-05-20T12:55:22.793494686Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-840762,Uid:cb210768643f1d2a3f5e71d39e6100ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716
209723843304242,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cb210768643f1d2a3f5e71d39e6100ee,kubernetes.io/config.seen: 2024-05-20T12:55:22.793501402Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ec84d2f3-502c-4f90-9388-c385104ea8e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.835642687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38dd64bf-4669-4309-a9f2-57a84f3b67b7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.835716701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38dd64bf-4669-4309-a9f2-57a84f3b67b7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:01:19 addons-840762 crio[679]: time="2024-05-20 13:01:19.836087792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c59cdf2dc6a09d9d755a7e29d09869853667e39761314195a525ca06f1dc35,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209996981078132,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map
[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c
5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:
a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Meta
data:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38dd64bf-4669-4309-a9f2-57a84f3b67b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d72dc52b7185c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago        Running             hello-world-app           0                   dbca97dc90a2f       hello-world-app-86c47465fc-cfg4n
	83c59cdf2dc6a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a            About a minute ago   Exited              gadget                    5                   fe89ff446540b       gadget-4r2zg
	0affd33442670       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                              2 minutes ago        Running             nginx                     0                   d99df72c56471       nginx
	382f55ae91d16       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        2 minutes ago        Running             headlamp                  0                   fd3c7dc9776c3       headlamp-68456f997b-5k6z6
	135a96f190c99       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 4 minutes ago        Running             gcp-auth                  0                   6684887cb09aa       gcp-auth-5db96cd9b4-cjjrn
	120b275a98f0e       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago        Exited              patch                     1                   18e6d4697d58e       ingress-nginx-admission-patch-xpvg2
	c1b7f95bfe632       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              create                    0                   83f3bee4f32fc       ingress-nginx-admission-create-lglgg
	0e9db02ffacd4       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago        Running             metrics-server            0                   354aac86fd4a4       metrics-server-c59844bb4-8g977
	5640739ae135d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   47557659a5b0a       yakd-dashboard-5ddbf7d777-hgp7b
	78fcce271acb3       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4               5 minutes ago        Running             cloud-spanner-emulator    0                   b221456a6d2ca       cloud-spanner-emulator-6fcd4f6f98-tzksc
	8e66ec7f2ae77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago        Running             storage-provisioner       0                   345c392f5452d       storage-provisioner
	7059a82048d9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago        Running             coredns                   0                   79eaca6d02036       coredns-7db6d8ff4d-vp4g8
	a0af7ffce7a12       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             5 minutes ago        Running             kube-proxy                0                   e6b145e6b7a46       kube-proxy-mpkr9
	10c3d12060059       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago        Running             etcd                      0                   a496785b5b5f5       etcd-addons-840762
	6363b2ba4829a       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             5 minutes ago        Running             kube-scheduler            0                   31de9fbe23d9b       kube-scheduler-addons-840762
	6cca9c1fefcd5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             5 minutes ago        Running             kube-controller-manager   0                   ef74fd5cfc67f       kube-controller-manager-addons-840762
	9b2ffe0b08efe       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             5 minutes ago        Running             kube-apiserver            0                   d591c03b18dc1       kube-apiserver-addons-840762
	
	
	==> coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] <==
	[INFO] 10.244.0.7:36312 - 833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133942s
	[INFO] 10.244.0.7:38189 - 20558 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140292s
	[INFO] 10.244.0.7:38189 - 49744 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00020756s
	[INFO] 10.244.0.7:40716 - 37403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000209728s
	[INFO] 10.244.0.7:40716 - 54809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000338691s
	[INFO] 10.244.0.7:34802 - 60141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076847s
	[INFO] 10.244.0.7:34802 - 13548 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001884167s
	[INFO] 10.244.0.7:46201 - 18591 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198s
	[INFO] 10.244.0.7:46201 - 17818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000450713s
	[INFO] 10.244.0.7:44069 - 5855 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169721s
	[INFO] 10.244.0.7:44069 - 43219 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081091s
	[INFO] 10.244.0.7:48623 - 843 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089929s
	[INFO] 10.244.0.7:48623 - 64597 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000185645s
	[INFO] 10.244.0.7:51149 - 3489 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100909s
	[INFO] 10.244.0.7:51149 - 15454 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071835s
	[INFO] 10.244.0.22:56551 - 48499 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373719s
	[INFO] 10.244.0.22:40318 - 16711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108319s
	[INFO] 10.244.0.22:39466 - 14127 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116621s
	[INFO] 10.244.0.22:54206 - 13934 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058539s
	[INFO] 10.244.0.22:56712 - 54214 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114656s
	[INFO] 10.244.0.22:56107 - 36752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091644s
	[INFO] 10.244.0.22:46924 - 25436 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001332083s
	[INFO] 10.244.0.22:54686 - 62944 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001589718s
	[INFO] 10.244.0.24:57177 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000620091s
	[INFO] 10.244.0.24:37965 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088343s
	
	
	==> describe nodes <==
	Name:               addons-840762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-840762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=addons-840762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-840762
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:55:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-840762
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:59:04 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:59:04 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:59:04 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:59:04 +0000   Mon, 20 May 2024 12:55:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    addons-840762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc07a572c69424e8b07c61391a8d459
	  System UUID:                0bc07a57-2c69-424e-8b07-c61391a8d459
	  Boot ID:                    1b84f601-3379-4074-9d98-222bacd601d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-tzksc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  default                     hello-world-app-86c47465fc-cfg4n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gadget                      gadget-4r2zg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-cjjrn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  headlamp                    headlamp-68456f997b-5k6z6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-7db6d8ff4d-vp4g8                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m38s
	  kube-system                 etcd-addons-840762                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m51s
	  kube-system                 kube-apiserver-addons-840762               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-controller-manager-addons-840762      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-proxy-mpkr9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-addons-840762               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 metrics-server-c59844bb4-8g977             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m32s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-hgp7b            0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m58s)  kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m58s)  kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m58s)  kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m51s                  kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s                  kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s                  kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m50s                  kubelet          Node addons-840762 status is now: NodeReady
	  Normal  RegisteredNode           5m39s                  node-controller  Node addons-840762 event: Registered Node addons-840762 in Controller
	
	
	==> dmesg <==
	[  +5.317555] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.571836] kauditd_printk_skb: 66 callbacks suppressed
	[May20 12:56] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.326752] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.228650] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.092557] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.597356] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.620848] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.044087] kauditd_printk_skb: 61 callbacks suppressed
	[May20 12:57] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.406626] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.331220] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.843229] kauditd_printk_skb: 37 callbacks suppressed
	[May20 12:58] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.616141] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.048897] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.235566] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.100710] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.393916] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.691140] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.455540] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.268416] kauditd_printk_skb: 4 callbacks suppressed
	[May20 12:59] kauditd_printk_skb: 15 callbacks suppressed
	[May20 13:01] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.742137] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] <==
	{"level":"info","ts":"2024-05-20T12:57:02.321521Z","caller":"traceutil/trace.go:171","msg":"trace[2119614539] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1143; }","duration":"213.7996ms","start":"2024-05-20T12:57:02.107675Z","end":"2024-05-20T12:57:02.321474Z","steps":["trace[2119614539] 'agreement among raft nodes before linearized reading'  (duration: 212.506688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.90392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:02.322075Z","caller":"traceutil/trace.go:171","msg":"trace[402781208] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1143; }","duration":"229.620558ms","start":"2024-05-20T12:57:02.092443Z","end":"2024-05-20T12:57:02.322064Z","steps":["trace[402781208] 'agreement among raft nodes before linearized reading'  (duration: 227.894066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.989566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-20T12:57:02.322823Z","caller":"traceutil/trace.go:171","msg":"trace[1202776357] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1143; }","duration":"276.407737ms","start":"2024-05-20T12:57:02.046401Z","end":"2024-05-20T12:57:02.322809Z","steps":["trace[1202776357] 'agreement among raft nodes before linearized reading'  (duration: 273.988404ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:09.105325Z","caller":"traceutil/trace.go:171","msg":"trace[1398827240] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"316.084124ms","start":"2024-05-20T12:57:08.789227Z","end":"2024-05-20T12:57:09.105311Z","steps":["trace[1398827240] 'process raft request'  (duration: 315.952768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.105525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:57:08.789209Z","time spent":"316.215355ms","remote":"127.0.0.1:53244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" mod_revision:1128 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" > >"}
	{"level":"info","ts":"2024-05-20T12:57:09.106442Z","caller":"traceutil/trace.go:171","msg":"trace[326326687] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"209.919064ms","start":"2024-05-20T12:57:08.89651Z","end":"2024-05-20T12:57:09.106429Z","steps":["trace[326326687] 'read index received'  (duration: 209.914275ms)","trace[326326687] 'applied index is now lower than readState.Index'  (duration: 4.136µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:09.106829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.885262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-05-20T12:57:09.106896Z","caller":"traceutil/trace.go:171","msg":"trace[1504113938] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1175; }","duration":"144.001652ms","start":"2024-05-20T12:57:08.962878Z","end":"2024-05-20T12:57:09.10688Z","steps":["trace[1504113938] 'agreement among raft nodes before linearized reading'  (duration: 143.810281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.107098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.589083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-05-20T12:57:09.107231Z","caller":"traceutil/trace.go:171","msg":"trace[2064512433] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048; range_end:; response_count:1; response_revision:1175; }","duration":"210.737054ms","start":"2024-05-20T12:57:08.896486Z","end":"2024-05-20T12:57:09.107223Z","steps":["trace[2064512433] 'agreement among raft nodes before linearized reading'  (duration: 210.56075ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751353Z","caller":"traceutil/trace.go:171","msg":"trace[882681731] linearizableReadLoop","detail":"{readStateIndex:1334; appliedIndex:1333; }","duration":"159.848303ms","start":"2024-05-20T12:57:45.591489Z","end":"2024-05-20T12:57:45.751337Z","steps":["trace[882681731] 'read index received'  (duration: 159.544489ms)","trace[882681731] 'applied index is now lower than readState.Index'  (duration: 303.24µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:45.751582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.05588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:45.751624Z","caller":"traceutil/trace.go:171","msg":"trace[1190867087] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1286; }","duration":"160.147841ms","start":"2024-05-20T12:57:45.591463Z","end":"2024-05-20T12:57:45.751611Z","steps":["trace[1190867087] 'agreement among raft nodes before linearized reading'  (duration: 159.984942ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751907Z","caller":"traceutil/trace.go:171","msg":"trace[346556204] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"270.337258ms","start":"2024-05-20T12:57:45.481561Z","end":"2024-05-20T12:57:45.751899Z","steps":["trace[346556204] 'process raft request'  (duration: 269.51066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.70607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.237789Z","time spent":"468.269671ms","remote":"127.0.0.1:52996","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-05-20T12:58:18.706201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.609863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:58:18.70625Z","caller":"traceutil/trace.go:171","msg":"trace[779024922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1361; }","duration":"329.774357ms","start":"2024-05-20T12:58:18.376465Z","end":"2024-05-20T12:58:18.706239Z","steps":["trace[779024922] 'agreement among raft nodes before linearized reading'  (duration: 329.619372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.706322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.376449Z","time spent":"329.864586ms","remote":"127.0.0.1:52954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-20T12:58:18.70606Z","caller":"traceutil/trace.go:171","msg":"trace[1211898191] linearizableReadLoop","detail":"{readStateIndex:1416; appliedIndex:1415; }","duration":"329.524623ms","start":"2024-05-20T12:58:18.3765Z","end":"2024-05-20T12:58:18.706024Z","steps":["trace[1211898191] 'read index received'  (duration: 329.28852ms)","trace[1211898191] 'applied index is now lower than readState.Index'  (duration: 235.026µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:58:18.706606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.994414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85719"}
	{"level":"info","ts":"2024-05-20T12:58:18.706632Z","caller":"traceutil/trace.go:171","msg":"trace[1149837529] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1361; }","duration":"145.072721ms","start":"2024-05-20T12:58:18.561551Z","end":"2024-05-20T12:58:18.706624Z","steps":["trace[1149837529] 'agreement among raft nodes before linearized reading'  (duration: 144.887626ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:58:37.684314Z","caller":"traceutil/trace.go:171","msg":"trace[809617530] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"138.77512ms","start":"2024-05-20T12:58:37.545506Z","end":"2024-05-20T12:58:37.684281Z","steps":["trace[809617530] 'process raft request'  (duration: 138.684656ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:59:26.543828Z","caller":"traceutil/trace.go:171","msg":"trace[1911464398] transaction","detail":"{read_only:false; response_revision:1806; number_of_response:1; }","duration":"109.341933ms","start":"2024-05-20T12:59:26.434453Z","end":"2024-05-20T12:59:26.543795Z","steps":["trace[1911464398] 'process raft request'  (duration: 109.039965ms)"],"step_count":1}
	
	
	==> gcp-auth [135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c] <==
	2024/05/20 12:57:09 GCP Auth Webhook started!
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	2024/05/20 12:58:32 Ready to marshal response ...
	2024/05/20 12:58:32 Ready to write response ...
	2024/05/20 12:58:32 Ready to marshal response ...
	2024/05/20 12:58:32 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:46 Ready to marshal response ...
	2024/05/20 12:58:46 Ready to write response ...
	2024/05/20 12:58:53 Ready to marshal response ...
	2024/05/20 12:58:53 Ready to write response ...
	2024/05/20 13:01:09 Ready to marshal response ...
	2024/05/20 13:01:09 Ready to write response ...
	
	
	==> kernel <==
	 13:01:20 up 6 min,  0 users,  load average: 0.31, 0.94, 0.56
	Linux addons-840762 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] <==
	E0520 12:57:48.934987       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0520 12:57:48.934742       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.937295       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.944679       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	I0520 12:57:49.061190       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 12:58:27.221268       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 12:58:33.015819       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.209.58"}
	I0520 12:58:46.386477       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 12:58:46.570418       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.163.48"}
	I0520 12:58:48.529971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.530022       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.555537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.555695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.587008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.587056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.602159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.602204       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.604193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.604228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0520 12:58:49.587843       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 12:58:49.605168       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0520 12:58:49.629588       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0520 12:58:49.633268       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 13:01:09.166329       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.226.165"}
	
	
	==> kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] <==
	W0520 12:59:34.326270       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 12:59:34.326361       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 12:59:55.384058       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 12:59:55.384321       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 12:59:59.384177       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 12:59:59.384279       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:00:23.439342       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:00:23.439409       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:00:31.818573       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:00:31.818745       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:00:49.749362       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:00:49.749493       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:01:02.301339       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:01:02.301449       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 13:01:09.018880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.686117ms"
	I0520 13:01:09.061383       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.44796ms"
	I0520 13:01:09.074916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="13.481525ms"
	I0520 13:01:09.074996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.454µs"
	I0520 13:01:11.819529       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0520 13:01:11.832284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="6.993µs"
	I0520 13:01:11.848055       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0520 13:01:12.820272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="11.684516ms"
	I0520 13:01:12.821041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.441µs"
	W0520 13:01:19.344715       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:01:19.344892       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] <==
	I0520 12:55:43.546950       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:55:43.566793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	I0520 12:55:43.676877       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:55:43.676950       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:55:43.676967       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:55:43.680164       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:55:43.680354       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:55:43.680369       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:55:43.683205       1 config.go:192] "Starting service config controller"
	I0520 12:55:43.683236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:55:43.683271       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:55:43.683275       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:55:43.683845       1 config.go:319] "Starting node config controller"
	I0520 12:55:43.683852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:55:43.783393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:55:43.783421       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:55:43.784288       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] <==
	W0520 12:55:26.558819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:26.558844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:26.558898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:55:26.558919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:55:26.559021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:55:26.559083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:55:27.360805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:55:27.360863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:55:27.410660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:55:27.410724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 12:55:27.421620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 12:55:27.421692       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 12:55:27.595975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:55:27.596024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:55:27.615749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:55:27.615778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 12:55:27.672874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.672999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.707891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:27.707934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:27.803616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.803709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.813500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:55:27.813540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 12:55:30.530739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:01:09 addons-840762 kubelet[1276]: I0520 13:01:09.125000    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvct\" (UniqueName: \"kubernetes.io/projected/9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc-kube-api-access-hqvct\") pod \"hello-world-app-86c47465fc-cfg4n\" (UID: \"9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc\") " pod="default/hello-world-app-86c47465fc-cfg4n"
	May 20 13:01:09 addons-840762 kubelet[1276]: I0520 13:01:09.125062    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc-gcp-creds\") pod \"hello-world-app-86c47465fc-cfg4n\" (UID: \"9200c5f6-46f7-480c-a3a2-d0d9fa3ca5bc\") " pod="default/hello-world-app-86c47465fc-cfg4n"
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.234403    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wxxj\" (UniqueName: \"kubernetes.io/projected/c057ec77-ddf8-4ad7-9001-a7b4f48a2d00-kube-api-access-5wxxj\") pod \"c057ec77-ddf8-4ad7-9001-a7b4f48a2d00\" (UID: \"c057ec77-ddf8-4ad7-9001-a7b4f48a2d00\") "
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.237055    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c057ec77-ddf8-4ad7-9001-a7b4f48a2d00-kube-api-access-5wxxj" (OuterVolumeSpecName: "kube-api-access-5wxxj") pod "c057ec77-ddf8-4ad7-9001-a7b4f48a2d00" (UID: "c057ec77-ddf8-4ad7-9001-a7b4f48a2d00"). InnerVolumeSpecName "kube-api-access-5wxxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.335037    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5wxxj\" (UniqueName: \"kubernetes.io/projected/c057ec77-ddf8-4ad7-9001-a7b4f48a2d00-kube-api-access-5wxxj\") on node \"addons-840762\" DevicePath \"\""
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.783990    1276 scope.go:117] "RemoveContainer" containerID="d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f"
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.844602    1276 scope.go:117] "RemoveContainer" containerID="d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f"
	May 20 13:01:10 addons-840762 kubelet[1276]: E0520 13:01:10.845619    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f\": container with ID starting with d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f not found: ID does not exist" containerID="d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f"
	May 20 13:01:10 addons-840762 kubelet[1276]: I0520 13:01:10.845662    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f"} err="failed to get container status \"d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f\": rpc error: code = NotFound desc = could not find container \"d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f\": container with ID starting with d486f81c5c6a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f not found: ID does not exist"
	May 20 13:01:11 addons-840762 kubelet[1276]: I0520 13:01:11.419699    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c057ec77-ddf8-4ad7-9001-a7b4f48a2d00" path="/var/lib/kubelet/pods/c057ec77-ddf8-4ad7-9001-a7b4f48a2d00/volumes"
	May 20 13:01:13 addons-840762 kubelet[1276]: I0520 13:01:13.391747    1276 scope.go:117] "RemoveContainer" containerID="83c59cdf2dc6a09d9d755a7e29d09869853667e39761314195a525ca06f1dc35"
	May 20 13:01:13 addons-840762 kubelet[1276]: E0520 13:01:13.392187    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:01:13 addons-840762 kubelet[1276]: I0520 13:01:13.396695    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a441bce-20c7-4f19-b940-4cb826784cea" path="/var/lib/kubelet/pods/7a441bce-20c7-4f19-b940-4cb826784cea/volumes"
	May 20 13:01:13 addons-840762 kubelet[1276]: I0520 13:01:13.397193    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901789e2-d702-40f6-a420-a1d24db58a4e" path="/var/lib/kubelet/pods/901789e2-d702-40f6-a420-a1d24db58a4e/volumes"
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.176361    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2prtl\" (UniqueName: \"kubernetes.io/projected/156342ec-3be6-4be5-9629-f89ca1ee418b-kube-api-access-2prtl\") pod \"156342ec-3be6-4be5-9629-f89ca1ee418b\" (UID: \"156342ec-3be6-4be5-9629-f89ca1ee418b\") "
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.176420    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/156342ec-3be6-4be5-9629-f89ca1ee418b-webhook-cert\") pod \"156342ec-3be6-4be5-9629-f89ca1ee418b\" (UID: \"156342ec-3be6-4be5-9629-f89ca1ee418b\") "
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.178713    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156342ec-3be6-4be5-9629-f89ca1ee418b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "156342ec-3be6-4be5-9629-f89ca1ee418b" (UID: "156342ec-3be6-4be5-9629-f89ca1ee418b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.179094    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156342ec-3be6-4be5-9629-f89ca1ee418b-kube-api-access-2prtl" (OuterVolumeSpecName: "kube-api-access-2prtl") pod "156342ec-3be6-4be5-9629-f89ca1ee418b" (UID: "156342ec-3be6-4be5-9629-f89ca1ee418b"). InnerVolumeSpecName "kube-api-access-2prtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.277626    1276 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/156342ec-3be6-4be5-9629-f89ca1ee418b-webhook-cert\") on node \"addons-840762\" DevicePath \"\""
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.277660    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2prtl\" (UniqueName: \"kubernetes.io/projected/156342ec-3be6-4be5-9629-f89ca1ee418b-kube-api-access-2prtl\") on node \"addons-840762\" DevicePath \"\""
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.395017    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156342ec-3be6-4be5-9629-f89ca1ee418b" path="/var/lib/kubelet/pods/156342ec-3be6-4be5-9629-f89ca1ee418b/volumes"
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.809091    1276 scope.go:117] "RemoveContainer" containerID="49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0"
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.823799    1276 scope.go:117] "RemoveContainer" containerID="49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0"
	May 20 13:01:15 addons-840762 kubelet[1276]: E0520 13:01:15.825073    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0\": container with ID starting with 49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0 not found: ID does not exist" containerID="49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0"
	May 20 13:01:15 addons-840762 kubelet[1276]: I0520 13:01:15.825207    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0"} err="failed to get container status \"49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0\": rpc error: code = NotFound desc = could not find container \"49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0\": container with ID starting with 49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0 not found: ID does not exist"
	
	
	==> storage-provisioner [8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638] <==
	I0520 12:55:49.365029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:55:49.426698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:55:49.426754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:55:49.559886       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:55:49.560949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc1e5491-e0b6-4a74-9796-3c1c2ff6413c", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e became leader
	I0520 12:55:49.567383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	I0520 12:55:49.775880       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-840762 -n addons-840762
helpers_test.go:261: (dbg) Run:  kubectl --context addons-840762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.86s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (334.56s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.813832ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006254802s
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (89.721453ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 2m25.913200843s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (68.177282ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 2m28.278696316s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (68.638884ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 2m31.889726791s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (221.979876ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 2m36.739120158s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (66.988245ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 2m46.979461109s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (67.941368ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 3m3.225668154s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (65.881205ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 3m35.845864988s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (69.374943ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 4m6.530406873s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (65.564704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 5m15.887518629s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (62.579666ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 6m2.573776028s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (64.865769ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 6m42.063526938s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-840762 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-840762 top pods -n kube-system: exit status 1 (69.750319ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vp4g8, age: 7m51.468859877s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-840762 -n addons-840762
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 logs -n 25: (1.445354514s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768                                                                     | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-562366                                                                     | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768                                                                     | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | binary-mirror-910817                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44813                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-910817                                                                     | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-840762 --wait=true                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | -p addons-840762                                                                            |                      |         |         |                     |                     |
	| ip      | addons-840762 ip                                                                            | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC |                     |
	|         | addons-840762                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | -p addons-840762                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840762 ssh cat                                                                       | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | /opt/local-path-provisioner/pvc-ef6f8a93-1567-44f6-8095-fb964ae1388e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840762 addons                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840762 addons                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840762 ssh curl -s                                                                   | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-840762 ip                                                                            | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-840762 addons disable                                                                | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-840762 addons                                                                        | addons-840762        | jenkins | v1.33.1 | 20 May 24 13:03 UTC | 20 May 24 13:03 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:54:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:54:50.749933  610501 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:54:50.750199  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750209  610501 out.go:304] Setting ErrFile to fd 2...
	I0520 12:54:50.750213  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750399  610501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 12:54:50.750992  610501 out.go:298] Setting JSON to false
	I0520 12:54:50.751872  610501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9431,"bootTime":1716200260,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:54:50.751931  610501 start.go:139] virtualization: kvm guest
	I0520 12:54:50.754672  610501 out.go:177] * [addons-840762] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:54:50.756981  610501 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 12:54:50.759177  610501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:54:50.756934  610501 notify.go:220] Checking for updates...
	I0520 12:54:50.761478  610501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:54:50.763622  610501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:50.765719  610501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:54:50.767722  610501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:54:50.769950  610501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:54:50.803102  610501 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:54:50.805399  610501 start.go:297] selected driver: kvm2
	I0520 12:54:50.805434  610501 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:54:50.805454  610501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:54:50.806441  610501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.806556  610501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:54:50.822923  610501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:54:50.822988  610501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:54:50.823216  610501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:54:50.823247  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:54:50.823257  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:54:50.823270  610501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:54:50.823335  610501 start.go:340] cluster config:
	{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:54:50.823464  610501 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.827202  610501 out.go:177] * Starting "addons-840762" primary control-plane node in "addons-840762" cluster
	I0520 12:54:50.829149  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:54:50.829183  610501 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:54:50.829194  610501 cache.go:56] Caching tarball of preloaded images
	I0520 12:54:50.829274  610501 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:54:50.829286  610501 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:54:50.829591  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:54:50.829616  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json: {Name:mk1bcc97b7c3196011ae8aa65e58032d87fa57bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:54:50.829771  610501 start.go:360] acquireMachinesLock for addons-840762: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:54:50.829815  610501 start.go:364] duration metric: took 31.227µs to acquireMachinesLock for "addons-840762"
	I0520 12:54:50.829832  610501 start.go:93] Provisioning new machine with config: &{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:54:50.829901  610501 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:54:50.832368  610501 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 12:54:50.832505  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:54:50.832552  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:54:50.847327  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0520 12:54:50.847765  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:54:50.848420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:54:50.848446  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:54:50.848806  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:54:50.849047  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:54:50.849193  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:54:50.849375  610501 start.go:159] libmachine.API.Create for "addons-840762" (driver="kvm2")
	I0520 12:54:50.849403  610501 client.go:168] LocalClient.Create starting
	I0520 12:54:50.849451  610501 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 12:54:50.991473  610501 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 12:54:51.176622  610501 main.go:141] libmachine: Running pre-create checks...
	I0520 12:54:51.176652  610501 main.go:141] libmachine: (addons-840762) Calling .PreCreateCheck
	I0520 12:54:51.177212  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:54:51.177703  610501 main.go:141] libmachine: Creating machine...
	I0520 12:54:51.177718  610501 main.go:141] libmachine: (addons-840762) Calling .Create
	I0520 12:54:51.177909  610501 main.go:141] libmachine: (addons-840762) Creating KVM machine...
	I0520 12:54:51.179266  610501 main.go:141] libmachine: (addons-840762) DBG | found existing default KVM network
	I0520 12:54:51.180081  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.179921  610539 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0520 12:54:51.180138  610501 main.go:141] libmachine: (addons-840762) DBG | created network xml: 
	I0520 12:54:51.180166  610501 main.go:141] libmachine: (addons-840762) DBG | <network>
	I0520 12:54:51.180178  610501 main.go:141] libmachine: (addons-840762) DBG |   <name>mk-addons-840762</name>
	I0520 12:54:51.180193  610501 main.go:141] libmachine: (addons-840762) DBG |   <dns enable='no'/>
	I0520 12:54:51.180204  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180218  610501 main.go:141] libmachine: (addons-840762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:54:51.180227  610501 main.go:141] libmachine: (addons-840762) DBG |     <dhcp>
	I0520 12:54:51.180235  610501 main.go:141] libmachine: (addons-840762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:54:51.180247  610501 main.go:141] libmachine: (addons-840762) DBG |     </dhcp>
	I0520 12:54:51.180255  610501 main.go:141] libmachine: (addons-840762) DBG |   </ip>
	I0520 12:54:51.180318  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180349  610501 main.go:141] libmachine: (addons-840762) DBG | </network>
	I0520 12:54:51.180368  610501 main.go:141] libmachine: (addons-840762) DBG | 
	I0520 12:54:51.186377  610501 main.go:141] libmachine: (addons-840762) DBG | trying to create private KVM network mk-addons-840762 192.168.39.0/24...
	I0520 12:54:51.253528  610501 main.go:141] libmachine: (addons-840762) DBG | private KVM network mk-addons-840762 192.168.39.0/24 created
	I0520 12:54:51.253564  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.253446  610539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.253577  610501 main.go:141] libmachine: (addons-840762) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.253591  610501 main.go:141] libmachine: (addons-840762) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:54:51.253664  610501 main.go:141] libmachine: (addons-840762) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:54:51.515102  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.514941  610539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa...
	I0520 12:54:51.762036  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761845  610539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk...
	I0520 12:54:51.762086  610501 main.go:141] libmachine: (addons-840762) DBG | Writing magic tar header
	I0520 12:54:51.762101  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 (perms=drwx------)
	I0520 12:54:51.762118  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:54:51.762125  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 12:54:51.762131  610501 main.go:141] libmachine: (addons-840762) DBG | Writing SSH key tar header
	I0520 12:54:51.762141  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761967  610539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.762151  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 12:54:51.762163  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762
	I0520 12:54:51.762179  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:54:51.762201  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:54:51.762212  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 12:54:51.762223  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.762236  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 12:54:51.762248  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:54:51.762255  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:51.762264  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:54:51.762277  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home
	I0520 12:54:51.762293  610501 main.go:141] libmachine: (addons-840762) DBG | Skipping /home - not owner
	I0520 12:54:51.763533  610501 main.go:141] libmachine: (addons-840762) define libvirt domain using xml: 
	I0520 12:54:51.763552  610501 main.go:141] libmachine: (addons-840762) <domain type='kvm'>
	I0520 12:54:51.763560  610501 main.go:141] libmachine: (addons-840762)   <name>addons-840762</name>
	I0520 12:54:51.763565  610501 main.go:141] libmachine: (addons-840762)   <memory unit='MiB'>4000</memory>
	I0520 12:54:51.763570  610501 main.go:141] libmachine: (addons-840762)   <vcpu>2</vcpu>
	I0520 12:54:51.763574  610501 main.go:141] libmachine: (addons-840762)   <features>
	I0520 12:54:51.763580  610501 main.go:141] libmachine: (addons-840762)     <acpi/>
	I0520 12:54:51.763586  610501 main.go:141] libmachine: (addons-840762)     <apic/>
	I0520 12:54:51.763593  610501 main.go:141] libmachine: (addons-840762)     <pae/>
	I0520 12:54:51.763604  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.763612  610501 main.go:141] libmachine: (addons-840762)   </features>
	I0520 12:54:51.763623  610501 main.go:141] libmachine: (addons-840762)   <cpu mode='host-passthrough'>
	I0520 12:54:51.763629  610501 main.go:141] libmachine: (addons-840762)   
	I0520 12:54:51.763646  610501 main.go:141] libmachine: (addons-840762)   </cpu>
	I0520 12:54:51.763655  610501 main.go:141] libmachine: (addons-840762)   <os>
	I0520 12:54:51.763660  610501 main.go:141] libmachine: (addons-840762)     <type>hvm</type>
	I0520 12:54:51.763665  610501 main.go:141] libmachine: (addons-840762)     <boot dev='cdrom'/>
	I0520 12:54:51.763669  610501 main.go:141] libmachine: (addons-840762)     <boot dev='hd'/>
	I0520 12:54:51.763678  610501 main.go:141] libmachine: (addons-840762)     <bootmenu enable='no'/>
	I0520 12:54:51.763688  610501 main.go:141] libmachine: (addons-840762)   </os>
	I0520 12:54:51.763701  610501 main.go:141] libmachine: (addons-840762)   <devices>
	I0520 12:54:51.763709  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='cdrom'>
	I0520 12:54:51.763728  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/boot2docker.iso'/>
	I0520 12:54:51.763746  610501 main.go:141] libmachine: (addons-840762)       <target dev='hdc' bus='scsi'/>
	I0520 12:54:51.763754  610501 main.go:141] libmachine: (addons-840762)       <readonly/>
	I0520 12:54:51.763758  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763770  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='disk'>
	I0520 12:54:51.763779  610501 main.go:141] libmachine: (addons-840762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:54:51.763793  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk'/>
	I0520 12:54:51.763806  610501 main.go:141] libmachine: (addons-840762)       <target dev='hda' bus='virtio'/>
	I0520 12:54:51.763814  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763826  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763839  610501 main.go:141] libmachine: (addons-840762)       <source network='mk-addons-840762'/>
	I0520 12:54:51.763850  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763859  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763868  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763874  610501 main.go:141] libmachine: (addons-840762)       <source network='default'/>
	I0520 12:54:51.763886  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763898  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763910  610501 main.go:141] libmachine: (addons-840762)     <serial type='pty'>
	I0520 12:54:51.763921  610501 main.go:141] libmachine: (addons-840762)       <target port='0'/>
	I0520 12:54:51.763931  610501 main.go:141] libmachine: (addons-840762)     </serial>
	I0520 12:54:51.763942  610501 main.go:141] libmachine: (addons-840762)     <console type='pty'>
	I0520 12:54:51.763953  610501 main.go:141] libmachine: (addons-840762)       <target type='serial' port='0'/>
	I0520 12:54:51.763964  610501 main.go:141] libmachine: (addons-840762)     </console>
	I0520 12:54:51.763972  610501 main.go:141] libmachine: (addons-840762)     <rng model='virtio'>
	I0520 12:54:51.763982  610501 main.go:141] libmachine: (addons-840762)       <backend model='random'>/dev/random</backend>
	I0520 12:54:51.763993  610501 main.go:141] libmachine: (addons-840762)     </rng>
	I0520 12:54:51.764002  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764015  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764028  610501 main.go:141] libmachine: (addons-840762)   </devices>
	I0520 12:54:51.764043  610501 main.go:141] libmachine: (addons-840762) </domain>
	I0520 12:54:51.764055  610501 main.go:141] libmachine: (addons-840762) 
	I0520 12:54:51.768989  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:fb:9f:32 in network default
	I0520 12:54:51.769612  610501 main.go:141] libmachine: (addons-840762) Ensuring networks are active...
	I0520 12:54:51.769643  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:51.770275  610501 main.go:141] libmachine: (addons-840762) Ensuring network default is active
	I0520 12:54:51.770537  610501 main.go:141] libmachine: (addons-840762) Ensuring network mk-addons-840762 is active
	I0520 12:54:51.770983  610501 main.go:141] libmachine: (addons-840762) Getting domain xml...
	I0520 12:54:51.771663  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:52.966989  610501 main.go:141] libmachine: (addons-840762) Waiting to get IP...
	I0520 12:54:52.967844  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:52.968374  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:52.968400  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:52.968341  610539 retry.go:31] will retry after 245.330251ms: waiting for machine to come up
	I0520 12:54:53.215880  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.216390  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.216416  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.216352  610539 retry.go:31] will retry after 286.616472ms: waiting for machine to come up
	I0520 12:54:53.505129  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.505630  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.505658  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.505618  610539 retry.go:31] will retry after 312.787625ms: waiting for machine to come up
	I0520 12:54:53.820350  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.820828  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.820859  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.820772  610539 retry.go:31] will retry after 375.629067ms: waiting for machine to come up
	I0520 12:54:54.198230  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.198645  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.198678  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.198600  610539 retry.go:31] will retry after 558.50452ms: waiting for machine to come up
	I0520 12:54:54.758250  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.758836  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.758867  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.758777  610539 retry.go:31] will retry after 772.745392ms: waiting for machine to come up
	I0520 12:54:55.532754  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:55.533179  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:55.533205  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:55.533125  610539 retry.go:31] will retry after 1.015067234s: waiting for machine to come up
	I0520 12:54:56.549875  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:56.550336  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:56.550366  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:56.550270  610539 retry.go:31] will retry after 1.340438643s: waiting for machine to come up
	I0520 12:54:57.892757  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:57.893191  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:57.893226  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:57.893143  610539 retry.go:31] will retry after 1.779000898s: waiting for machine to come up
	I0520 12:54:59.674439  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:59.674849  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:59.674878  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:59.674795  610539 retry.go:31] will retry after 1.912219697s: waiting for machine to come up
	I0520 12:55:01.588719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:01.589170  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:01.589211  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:01.589118  610539 retry.go:31] will retry after 2.779568547s: waiting for machine to come up
	I0520 12:55:04.372082  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:04.372519  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:04.372543  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:04.372481  610539 retry.go:31] will retry after 2.436821512s: waiting for machine to come up
	I0520 12:55:06.810430  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:06.810907  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:06.810932  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:06.810869  610539 retry.go:31] will retry after 4.499322165s: waiting for machine to come up
	I0520 12:55:11.311574  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.311986  610501 main.go:141] libmachine: (addons-840762) Found IP for machine: 192.168.39.19
	I0520 12:55:11.312007  610501 main.go:141] libmachine: (addons-840762) Reserving static IP address...
	I0520 12:55:11.312017  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has current primary IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.312416  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find host DHCP lease matching {name: "addons-840762", mac: "52:54:00:0f:4e:d2", ip: "192.168.39.19"} in network mk-addons-840762
	I0520 12:55:11.448691  610501 main.go:141] libmachine: (addons-840762) DBG | Getting to WaitForSSH function...
	I0520 12:55:11.448724  610501 main.go:141] libmachine: (addons-840762) Reserved static IP address: 192.168.39.19
	I0520 12:55:11.448738  610501 main.go:141] libmachine: (addons-840762) Waiting for SSH to be available...
	I0520 12:55:11.451103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451496  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.451530  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451644  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH client type: external
	I0520 12:55:11.451668  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa (-rw-------)
	I0520 12:55:11.451710  610501 main.go:141] libmachine: (addons-840762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:55:11.451725  610501 main.go:141] libmachine: (addons-840762) DBG | About to run SSH command:
	I0520 12:55:11.451742  610501 main.go:141] libmachine: (addons-840762) DBG | exit 0
	I0520 12:55:11.581117  610501 main.go:141] libmachine: (addons-840762) DBG | SSH cmd err, output: <nil>: 
	I0520 12:55:11.581495  610501 main.go:141] libmachine: (addons-840762) KVM machine creation complete!
	I0520 12:55:11.581804  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:11.616351  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616704  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616919  610501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:55:11.616938  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:11.618424  610501 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:55:11.618443  610501 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:55:11.618453  610501 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:55:11.618462  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.620876  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621298  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.621331  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621539  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.621744  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.621950  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.622137  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.622327  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.622536  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.622550  610501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:55:11.732457  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:11.732485  610501 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:55:11.732494  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.736096  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736526  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.736565  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736781  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.737000  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737207  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737385  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.737562  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.737730  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.737740  610501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:55:11.846191  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:55:11.846307  610501 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:55:11.846320  610501 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:55:11.846331  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846646  610501 buildroot.go:166] provisioning hostname "addons-840762"
	I0520 12:55:11.846679  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846901  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.849576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850003  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.850032  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850162  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.850370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850550  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.850877  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.851054  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.851066  610501 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-840762 && echo "addons-840762" | sudo tee /etc/hostname
	I0520 12:55:11.976542  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-840762
	
	I0520 12:55:11.976570  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.979683  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.979984  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.980011  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.980169  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.980409  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980578  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.980890  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.981083  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.981099  610501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-840762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-840762/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-840762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:55:12.102001  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:12.102048  610501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 12:55:12.102072  610501 buildroot.go:174] setting up certificates
	I0520 12:55:12.102083  610501 provision.go:84] configureAuth start
	I0520 12:55:12.102092  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:12.102454  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.105413  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.105813  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.105841  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.106053  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.108107  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108401  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.108434  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108544  610501 provision.go:143] copyHostCerts
	I0520 12:55:12.108615  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 12:55:12.108744  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 12:55:12.108804  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 12:55:12.108851  610501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.addons-840762 san=[127.0.0.1 192.168.39.19 addons-840762 localhost minikube]
	I0520 12:55:12.292779  610501 provision.go:177] copyRemoteCerts
	I0520 12:55:12.292840  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:55:12.292869  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.295591  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.295908  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.295936  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.296100  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.296359  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.296512  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.296659  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.382793  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 12:55:12.406307  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:55:12.428152  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:55:12.450174  610501 provision.go:87] duration metric: took 348.071182ms to configureAuth
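The configureAuth step above (copyHostCerts, server cert generation, copyRemoteCerts) amounts to signing a server certificate with the local minikube CA, using the node's names and IPs as SANs, and copying it to the guest. A hand-rolled openssl sketch of the same idea, with the SAN list and paths taken from the log; minikube's actual implementation uses its own Go crypto helpers, so treat this as an equivalent, not the real code path:

	# Sketch only: mint a server cert signed by the existing CA with the SANs from the log.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/CN=addons-840762/O=jenkins.addons-840762" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -sha256 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.19,DNS:addons-840762,DNS:localhost,DNS:minikube")
	# Then copy to the guest, as copyRemoteCerts does (minikube places them under /etc/docker/ via sudo):
	scp -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa \
	  ca.pem server.pem server-key.pem docker@192.168.39.19:/tmp/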
	I0520 12:55:12.450217  610501 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:55:12.450425  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:12.450508  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.453476  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.453934  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.453969  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.454114  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.454327  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454542  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454671  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.454839  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.455084  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.455101  610501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:55:12.724253  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:55:12.724287  610501 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:55:12.724297  610501 main.go:141] libmachine: (addons-840762) Calling .GetURL
	I0520 12:55:12.725626  610501 main.go:141] libmachine: (addons-840762) DBG | Using libvirt version 6000000
	I0520 12:55:12.728077  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728460  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.728490  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728650  610501 main.go:141] libmachine: Docker is up and running!
	I0520 12:55:12.728678  610501 main.go:141] libmachine: Reticulating splines...
	I0520 12:55:12.728688  610501 client.go:171] duration metric: took 21.879272392s to LocalClient.Create
	I0520 12:55:12.728716  610501 start.go:167] duration metric: took 21.879341856s to libmachine.API.Create "addons-840762"
	I0520 12:55:12.728725  610501 start.go:293] postStartSetup for "addons-840762" (driver="kvm2")
	I0520 12:55:12.728742  610501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:55:12.728761  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.729013  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:55:12.729042  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.731260  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731556  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.731576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731738  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.731952  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.732118  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.732284  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.815344  610501 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:55:12.819138  610501 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:55:12.819172  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 12:55:12.819249  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 12:55:12.819273  610501 start.go:296] duration metric: took 90.538988ms for postStartSetup
	I0520 12:55:12.819320  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:12.819902  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.822344  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822666  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.822698  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822886  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:55:12.823055  610501 start.go:128] duration metric: took 21.993143462s to createHost
	I0520 12:55:12.823077  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.825156  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825572  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.825598  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825816  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.826086  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826305  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826500  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.826715  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.826884  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.826895  610501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:55:12.937875  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716209712.902821410
	
	I0520 12:55:12.937911  610501 fix.go:216] guest clock: 1716209712.902821410
	I0520 12:55:12.937923  610501 fix.go:229] Guest: 2024-05-20 12:55:12.90282141 +0000 UTC Remote: 2024-05-20 12:55:12.823066987 +0000 UTC m=+22.107122705 (delta=79.754423ms)
	I0520 12:55:12.937959  610501 fix.go:200] guest clock delta is within tolerance: 79.754423ms
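The guest-clock check above reads a seconds.nanoseconds timestamp over SSH and accepts a small delta against the host clock (here ~80ms). A hypothetical manual equivalent using the same key and user from the log, not a minikube command:

	# Hypothetical skew check; key path, user, and IP are taken from the log above.
	KEY=/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa
	guest=$(ssh -i "$KEY" docker@192.168.39.19 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest-host delta: %.6fs\n", h - g }'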
	I0520 12:55:12.937968  610501 start.go:83] releasing machines lock for "addons-840762", held for 22.108141971s
	I0520 12:55:12.937999  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.938309  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.941417  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941810  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.941840  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941966  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942466  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942664  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942768  610501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:55:12.942823  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.942897  610501 ssh_runner.go:195] Run: cat /version.json
	I0520 12:55:12.942918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.945235  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945541  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.945560  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945578  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945756  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.945928  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946081  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.946102  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.946236  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.946316  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.946449  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946595  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946736  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	W0520 12:55:13.060984  610501 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:55:13.061095  610501 ssh_runner.go:195] Run: systemctl --version
	I0520 12:55:13.067028  610501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:55:13.231228  610501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:55:13.237522  610501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:55:13.237591  610501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:55:13.252624  610501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
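The find invocation above (logged with its format verbs mangled) renames any pre-existing bridge/podman CNI configs so they do not collide with the CNI that minikube configures later (the bridge CNI is recommended a few seconds on). A more readable sketch with the same effect, matching the 87-podman-bridge.conflist rename the log reports; it is not the literal command minikube runs:

	# Park competing CNI configs out of the way, as minikube does with the .mk_disabled suffix.
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] || continue
	  case "$f" in *.mk_disabled) continue ;; esac
	  sudo mv "$f" "$f.mk_disabled"
	done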
	I0520 12:55:13.252647  610501 start.go:494] detecting cgroup driver to use...
	I0520 12:55:13.252707  610501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:55:13.267587  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:55:13.282311  610501 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:55:13.282382  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:55:13.296303  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:55:13.309620  610501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:55:13.423597  610501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:55:13.589483  610501 docker.go:233] disabling docker service ...
	I0520 12:55:13.589574  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:55:13.603417  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:55:13.615738  610501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:55:13.729481  610501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:55:13.860853  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:55:13.873990  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:55:13.891599  610501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:55:13.891677  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.901887  610501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:55:13.901958  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.912206  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.922183  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.931875  610501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:55:13.941703  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.951407  610501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.967696  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
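The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Roughly, the fragment they aim to produce looks like the following; this is a reconstruction from the sed patterns in the log (written to an .example path here), not a dump of the actual file on the node:

	sudo tee /etc/crio/crio.conf.d/02-crio.conf.example >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF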
	I0520 12:55:13.977475  610501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:55:13.986454  610501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:55:13.986509  610501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:55:13.998511  610501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:55:14.007925  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:14.124297  610501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:55:14.265547  610501 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:55:14.265641  610501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:55:14.270847  610501 start.go:562] Will wait 60s for crictl version
	I0520 12:55:14.270917  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:55:14.274825  610501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:55:14.318641  610501 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:55:14.318754  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.346323  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.377643  610501 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:55:14.379895  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:14.382720  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383143  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:14.383180  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383427  610501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:55:14.387501  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:14.399548  610501 kubeadm.go:877] updating cluster {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:55:14.399660  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:55:14.399703  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:14.429577  610501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 12:55:14.429652  610501 ssh_runner.go:195] Run: which lz4
	I0520 12:55:14.433365  610501 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 12:55:14.437014  610501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 12:55:14.437053  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 12:55:15.637746  610501 crio.go:462] duration metric: took 1.204422377s to copy over tarball
	I0520 12:55:15.637823  610501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 12:55:17.802635  610501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.164782874s)
	I0520 12:55:17.802675  610501 crio.go:469] duration metric: took 2.164898269s to extract the tarball
	I0520 12:55:17.802686  610501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 12:55:17.838706  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:17.877747  610501 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:55:17.877773  610501 cache_images.go:84] Images are preloaded, skipping loading
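The preload path above is: ask crictl for the expected kube images, and when they are missing, scp the ~395 MB preload tarball and unpack it into /var with lz4-compressed tar. To poke at such a tarball by hand, something like the following works (requires the lz4 CLI on the host; purely illustrative):

	# List a few entries in the preloaded image tarball without extracting it.
	tar -I lz4 -tf /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 | head
	# After extraction on the node, confirm the runtime sees the images, as the log does:
	sudo crictl images --output json | head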
	I0520 12:55:17.877783  610501 kubeadm.go:928] updating node { 192.168.39.19 8443 v1.30.1 crio true true} ...
	I0520 12:55:17.877923  610501 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-840762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
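The kubelet unit fragment above is installed on the node as a systemd drop-in (the log scps 10-kubeadm.conf and kubelet.service a few lines later). To inspect the result on the guest, the standard systemd tooling is enough; these commands are shown for reference and are not run by the test:

	# Show the effective kubelet unit including minikube's drop-in:
	sudo systemctl cat kubelet
	# The kubelet flags and config that kubeadm writes during init:
	sudo cat /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml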
	I0520 12:55:17.878011  610501 ssh_runner.go:195] Run: crio config
	I0520 12:55:17.922732  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:17.922758  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:17.922785  610501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:55:17.922825  610501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-840762 NodeName:addons-840762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:55:17.922996  610501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-840762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 12:55:17.923077  610501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:55:17.932833  610501 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:55:17.932937  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 12:55:17.941978  610501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:55:17.957376  610501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:55:17.972370  610501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
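At this point the rendered kubeadm config has landed on the node as /var/tmp/minikube/kubeadm.yaml.new and is copied into place just before init. With the kubeadm v1.30.1 binary the log shows under /var/lib/minikube/binaries, it can be sanity-checked independently; these are standard kubeadm subcommands, shown for reference rather than anything the test invokes:

	# Validate the rendered config against the v1beta3/v1beta1 APIs before init:
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# Compare with upstream defaults if a field looks off:
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults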
	I0520 12:55:17.987265  610501 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I0520 12:55:17.990708  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:18.001573  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:18.127654  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:18.143797  610501 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762 for IP: 192.168.39.19
	I0520 12:55:18.143820  610501 certs.go:194] generating shared ca certs ...
	I0520 12:55:18.143842  610501 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.144003  610501 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 12:55:18.358697  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt ...
	I0520 12:55:18.358733  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt: {Name:mk0337969521f8fcb91840a13b9dacd1361e0416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.358935  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key ...
	I0520 12:55:18.358950  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key: {Name:mk0b3018854c3a76c6bc712c400145554051e5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.359066  610501 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 12:55:18.637573  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt ...
	I0520 12:55:18.637611  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt: {Name:mk4030326ff4bd93acf0ae11bc67ee09461f2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637793  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key ...
	I0520 12:55:18.637804  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key: {Name:mk368b7d66fa86a67c9ef13f55a63c8fbe995e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637889  610501 certs.go:256] generating profile certs ...
	I0520 12:55:18.637948  610501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key
	I0520 12:55:18.637962  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt with IP's: []
	I0520 12:55:18.765434  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt ...
	I0520 12:55:18.765467  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: {Name:mk555ad1a22ae83e71bd1d88db4cd731d3a9df3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765635  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key ...
	I0520 12:55:18.765646  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key: {Name:mkc4037f80e62a174b1c3df78060c4c466e65958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765712  610501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da
	I0520 12:55:18.765730  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19]
	I0520 12:55:18.937615  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da ...
	I0520 12:55:18.937656  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da: {Name:mk5a01215158cf3231fad08bb78d8a3dfa212c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937851  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da ...
	I0520 12:55:18.937873  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da: {Name:mk298b016f1b857a88dbdb4cbaadf8e747393b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937973  610501 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt
	I0520 12:55:18.938079  610501 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key
	I0520 12:55:18.938151  610501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key
	I0520 12:55:18.938179  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt with IP's: []
	I0520 12:55:19.226331  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt ...
	I0520 12:55:19.226369  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt: {Name:mk192ed701b920896d7fa7fbd1cf8e177461df3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226564  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key ...
	I0520 12:55:19.226582  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key: {Name:mk3ad4b89a8ee430000e1f8b8ab63f33e943010e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226798  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 12:55:19.226843  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 12:55:19.226878  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:55:19.226916  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 12:55:19.227551  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:55:19.253380  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:55:19.275654  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:55:19.297712  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 12:55:19.319707  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 12:55:19.341205  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:55:19.365239  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:55:19.390731  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:55:19.416007  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:55:19.438628  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:55:19.454417  610501 ssh_runner.go:195] Run: openssl version
	I0520 12:55:19.459803  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:55:19.471875  610501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476597  610501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476677  610501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.483260  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
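The openssl steps above install minikubeCA.pem into the system trust store: the cert is linked under /usr/share/ca-certificates and /etc/ssl/certs, and b5213941.0 is simply its OpenSSL subject-hash name, which is what verification code looks up. The same thing by hand, mirroring what the log does (illustrative only):

	# Link the CA under its OpenSSL subject hash so TLS verification can find it.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# Spot-check a cert signed by this CA (path from the scp step above):
	openssl verify -CAfile /usr/share/ca-certificates/minikubeCA.pem /var/lib/minikube/certs/apiserver.crt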
	I0520 12:55:19.497343  610501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:55:19.501416  610501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:55:19.501498  610501 kubeadm.go:391] StartCluster: {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:55:19.501602  610501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:55:19.501684  610501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:55:19.545075  610501 cri.go:89] found id: ""
	I0520 12:55:19.545173  610501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 12:55:19.554806  610501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 12:55:19.568214  610501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 12:55:19.577374  610501 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 12:55:19.577399  610501 kubeadm.go:156] found existing configuration files:
	
	I0520 12:55:19.577443  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 12:55:19.585694  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 12:55:19.585763  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 12:55:19.594289  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 12:55:19.602494  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 12:55:19.602553  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 12:55:19.611323  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.619340  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 12:55:19.619399  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.628227  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 12:55:19.636652  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 12:55:19.636728  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 12:55:19.645298  610501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 12:55:19.702471  610501 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 12:55:19.702580  610501 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 12:55:19.825588  610501 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 12:55:19.825748  610501 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 12:55:19.825886  610501 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 12:55:20.025596  610501 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 12:55:20.083699  610501 out.go:204]   - Generating certificates and keys ...
	I0520 12:55:20.083850  610501 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:55:20.083934  610501 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:55:20.092217  610501 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:55:20.364436  610501 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:55:20.502138  610501 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:55:20.564527  610501 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:55:20.703162  610501 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:55:20.703407  610501 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:20.770361  610501 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:55:20.884233  610501 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:21.012631  610501 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:55:21.208632  610501 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:55:21.332544  610501 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:55:21.332752  610501 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:55:21.589278  610501 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:55:21.706399  610501 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:55:21.812525  610501 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:55:21.987255  610501 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:55:22.050057  610501 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:55:22.050588  610501 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:55:22.054797  610501 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:55:22.057239  610501 out.go:204]   - Booting up control plane ...
	I0520 12:55:22.057342  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:55:22.057410  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:55:22.057492  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:55:22.071354  610501 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:55:22.072252  610501 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:55:22.072345  610501 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:55:22.194444  610501 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:55:22.194562  610501 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:55:23.195085  610501 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001405192s
	I0520 12:55:23.195201  610501 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:55:28.694415  610501 kubeadm.go:309] [api-check] The API server is healthy after 5.502847931s
	I0520 12:55:28.714022  610501 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:55:28.726753  610501 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:55:28.761883  610501 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:55:28.762170  610501 kubeadm.go:309] [mark-control-plane] Marking the node addons-840762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:55:28.775335  610501 kubeadm.go:309] [bootstrap-token] Using token: ujdvgq.4r4gsjxdolox8f2t
	I0520 12:55:28.777700  610501 out.go:204]   - Configuring RBAC rules ...
	I0520 12:55:28.777840  610501 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:55:28.782202  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:55:28.794168  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:55:28.797442  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:55:28.800674  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:55:28.804165  610501 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:55:29.101623  610501 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:55:29.550656  610501 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:55:30.105708  610501 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:55:30.106638  610501 kubeadm.go:309] 
	I0520 12:55:30.106743  610501 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:55:30.106763  610501 kubeadm.go:309] 
	I0520 12:55:30.106876  610501 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:55:30.106899  610501 kubeadm.go:309] 
	I0520 12:55:30.106949  610501 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:55:30.107030  610501 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:55:30.107100  610501 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:55:30.107110  610501 kubeadm.go:309] 
	I0520 12:55:30.107159  610501 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:55:30.107165  610501 kubeadm.go:309] 
	I0520 12:55:30.107205  610501 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:55:30.107211  610501 kubeadm.go:309] 
	I0520 12:55:30.107253  610501 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:55:30.107333  610501 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:55:30.107424  610501 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:55:30.107431  610501 kubeadm.go:309] 
	I0520 12:55:30.107535  610501 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:55:30.107635  610501 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:55:30.107644  610501 kubeadm.go:309] 
	I0520 12:55:30.107756  610501 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.107892  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 12:55:30.107936  610501 kubeadm.go:309] 	--control-plane 
	I0520 12:55:30.107945  610501 kubeadm.go:309] 
	I0520 12:55:30.108063  610501 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:55:30.108079  610501 kubeadm.go:309] 
	I0520 12:55:30.108173  610501 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.108271  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 12:55:30.108549  610501 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
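The only warning kubeadm emitted is the disabled kubelet unit; if desired, the fix it suggests can be run over minikube's ssh (a sketch reusing the exact command from the warning):

	out/minikube-linux-amd64 -p addons-840762 ssh "sudo systemctl enable kubelet.service"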
	I0520 12:55:30.108578  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:30.108590  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:30.111265  610501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 12:55:30.113507  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 12:55:30.123451  610501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
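The 496-byte conflist itself is not echoed in the log; to see exactly what bridge CNI configuration was written, it can be read back from the node (a convenience sketch, assuming the profile is still running):

	out/minikube-linux-amd64 -p addons-840762 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
	out/minikube-linux-amd64 -p addons-840762 ssh "ls -l /etc/cni/net.d/"   # any other CNI configs CRI-O could pick up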
	I0520 12:55:30.139800  610501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:55:30.139944  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-840762 minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=addons-840762 minikube.k8s.io/primary=true
	I0520 12:55:30.139947  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.244780  610501 ops.go:34] apiserver oom_adj: -16
	I0520 12:55:30.244858  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.745128  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.245492  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.745341  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.244914  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.745755  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.245160  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.245731  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.745905  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.245566  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.245227  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.745121  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.245280  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.245665  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.745064  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.245512  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.745828  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.245009  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.745277  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.245343  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.745342  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.245464  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.745186  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.919515  610501 kubeadm.go:1107] duration metric: took 12.779637158s to wait for elevateKubeSystemPrivileges
	W0520 12:55:42.919570  610501 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 12:55:42.919582  610501 kubeadm.go:393] duration metric: took 23.418090172s to StartCluster
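The repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount to exist before it proceeds (the elevateKubeSystemPrivileges wait). A rough shell equivalent of that wait, using the exact command from the log with an illustrative loop wrapper (minikube's real loop is in Go):

	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done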
	I0520 12:55:42.919607  610501 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.919772  610501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:55:42.920344  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.920956  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 12:55:42.921004  610501 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:55:42.923778  610501 out.go:177] * Verifying Kubernetes components...
	I0520 12:55:42.921047  610501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 12:55:42.921275  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
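Each key in the toEnable map above is a regular minikube addon, so the same set can be inspected or toggled per addon from the CLI (standard minikube commands, shown for orientation only):

	out/minikube-linux-amd64 -p addons-840762 addons list
	out/minikube-linux-amd64 -p addons-840762 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-840762 addons disable metrics-server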
	I0520 12:55:42.926173  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:42.926185  610501 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-840762"
	I0520 12:55:42.926207  610501 addons.go:69] Setting inspektor-gadget=true in profile "addons-840762"
	I0520 12:55:42.926220  610501 addons.go:69] Setting metrics-server=true in profile "addons-840762"
	I0520 12:55:42.926235  610501 addons.go:69] Setting helm-tiller=true in profile "addons-840762"
	I0520 12:55:42.926254  610501 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-840762"
	I0520 12:55:42.926257  610501 addons.go:69] Setting cloud-spanner=true in profile "addons-840762"
	I0520 12:55:42.926263  610501 addons.go:69] Setting ingress-dns=true in profile "addons-840762"
	I0520 12:55:42.926270  610501 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-840762"
	I0520 12:55:42.926271  610501 addons.go:69] Setting storage-provisioner=true in profile "addons-840762"
	I0520 12:55:42.926277  610501 addons.go:234] Setting addon cloud-spanner=true in "addons-840762"
	I0520 12:55:42.926279  610501 addons.go:69] Setting gcp-auth=true in profile "addons-840762"
	I0520 12:55:42.926283  610501 addons.go:234] Setting addon ingress-dns=true in "addons-840762"
	I0520 12:55:42.926284  610501 addons.go:69] Setting default-storageclass=true in profile "addons-840762"
	I0520 12:55:42.926297  610501 mustload.go:65] Loading cluster: addons-840762
	I0520 12:55:42.926305  610501 addons.go:69] Setting registry=true in profile "addons-840762"
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926319  610501 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-840762"
	I0520 12:55:42.926323  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926321  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-840762"
	I0520 12:55:42.926335  610501 addons.go:234] Setting addon registry=true in "addons-840762"
	I0520 12:55:42.926338  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-840762"
	I0520 12:55:42.926364  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926510  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:42.926801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon inspektor-gadget=true in "addons-840762"
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon metrics-server=true in "addons-840762"
	I0520 12:55:42.926856  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926862  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926869  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926877  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926889  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926258  610501 addons.go:69] Setting ingress=true in profile "addons-840762"
	I0520 12:55:42.926904  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926907  610501 addons.go:69] Setting volumesnapshots=true in profile "addons-840762"
	I0520 12:55:42.926926  610501 addons.go:234] Setting addon ingress=true in "addons-840762"
	I0520 12:55:42.926932  610501 addons.go:234] Setting addon volumesnapshots=true in "addons-840762"
	I0520 12:55:42.926956  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926960  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926250  610501 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:42.927007  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927203  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927223  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927277  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927304  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927313  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927321  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927342  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926273  610501 addons.go:234] Setting addon helm-tiller=true in "addons-840762"
	I0520 12:55:42.927353  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926299  610501 addons.go:234] Setting addon storage-provisioner=true in "addons-840762"
	I0520 12:55:42.927324  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927371  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927403  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927420  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927438  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926211  610501 addons.go:69] Setting yakd=true in profile "addons-840762"
	I0520 12:55:42.927468  610501 addons.go:234] Setting addon yakd=true in "addons-840762"
	I0520 12:55:42.927472  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927519  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927850  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927890  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927962  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928030  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928378  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928410  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.928472  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928500  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.949431  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0520 12:55:42.949456  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0520 12:55:42.949517  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0520 12:55:42.949805  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0520 12:55:42.950251  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950259  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950280  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.950304  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.961815  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.961998  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962130  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962181  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0520 12:55:42.962318  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0520 12:55:42.962475  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962887  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963010  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963210  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963226  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963369  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963380  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963502  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963513  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963640  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963651  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963820  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.964552  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.964602  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.964934  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.964957  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965029  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965087  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965217  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.965230  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965317  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965630  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.965679  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.965788  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966394  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.966436  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.966662  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966702  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:42.967039  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967085  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.967295  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967336  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.968919  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 12:55:42.969170  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.969564  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.969595  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.969824  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.970420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.970440  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.970891  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.971471  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.971504  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.983702  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0520 12:55:42.989821  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.990621  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.990649  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.991055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.991712  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.991761  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.002410  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0520 12:55:43.003132  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.003287  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0520 12:55:43.003423  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0520 12:55:43.003921  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.004372  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0520 12:55:43.004660  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004675  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004807  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004818  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004868  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005179  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005279  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005691  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005760  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0520 12:55:43.006499  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.006546  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.006783  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.007377  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007400  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.007554  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007567  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008005  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.008037  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.008289  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0520 12:55:43.008399  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.008419  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008780  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.008992  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009063  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.009221  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.009752  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.009789  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.010310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0520 12:55:43.010592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.010621  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.011044  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.011105  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.011348  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.011840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.011881  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.012129  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.012289  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.012304  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.015140  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 12:55:43.012670  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.016270  610501 addons.go:234] Setting addon default-storageclass=true in "addons-840762"
	I0520 12:55:43.017402  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.017801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.017842  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.020141  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.019350  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0520 12:55:43.019379  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0520 12:55:43.019420  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.021536  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0520 12:55:43.022303  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.024787  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.024809  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 12:55:43.024831  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.023254  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023306  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0520 12:55:43.023345  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0520 12:55:43.023354  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.026350  610501 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-840762"
	I0520 12:55:43.026398  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.026788  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.026828  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.027387  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0520 12:55:43.027626  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.027638  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.028051  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.028314  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0520 12:55:43.028592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.028611  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029136  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.029215  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.029238  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.029295  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029296  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.029315  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.029505  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.029572  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.029626  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.029815  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029880  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
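The ssh client parameters logged here map to a plain ssh invocation against the VM, which can be handy when reproducing addon failures by hand (sketch; assumes the key and VM from this run still exist):

	ssh -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa \
	    -p 22 docker@192.168.39.19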
	I0520 12:55:43.030169  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.030776  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.030822  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.031146  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.031163  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.031323  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.034413  610501 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 12:55:43.031845  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031879  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031970  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.032176  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.032375  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.036749  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.036763  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 12:55:43.036787  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.037457  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037481  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037723  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037740  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037816  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.038379  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038890  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0520 12:55:43.039115  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37507
	I0520 12:55:43.039514  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.039999  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.040190  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040214  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040290  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040641  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040675  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.040795  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040809  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040858  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.040862  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.043266  610501 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 12:55:43.041720  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.042600  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.042944  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.043023  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.043541  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044232  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.044298  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044797  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.045484  610501 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 12:55:43.045497  610501 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 12:55:43.045518  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.045599  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.045639  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.045667  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.048013  610501 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 12:55:43.046613  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.046718  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.046798  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0520 12:55:43.049336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.050022  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.050433  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.050492  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 12:55:43.050855  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.051378  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 12:55:43.052681  610501 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 12:55:43.052723  610501 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 12:55:43.052806  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.053613  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.053643  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.054003  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.054050  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.055062  610501 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 12:55:43.055263  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.055499  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.057312  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.057625  610501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 12:55:43.058419  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.058451  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.058549  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.059404  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0520 12:55:43.059425  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 12:55:43.059434  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 12:55:43.059639  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 12:55:43.059783  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.060180  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.060216  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0520 12:55:43.061533  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061624  610501 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 12:55:43.061636  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061845  610501 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 12:55:43.061908  610501 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 12:55:43.061914  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 12:55:43.062300  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.063479  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.063614  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.063635  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 12:55:43.063653  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063658  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 12:55:43.063674  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063734  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063764  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063794  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.064448  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064498  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064525  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064620  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.064627  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.064717  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064761  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064800  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.065417  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.069387  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069428  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.069390  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069457  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069560  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.069579  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.073328  610501 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 12:55:43.070346  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.070518  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071145  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071398  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.071491  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.072013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.072535  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073487  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073620  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074419  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.074767  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074877  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42271
	I0520 12:55:43.075149  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.076245  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 12:55:43.076377  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076274  610501 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 12:55:43.076312  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.076399  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076485  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076491  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076518  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076262  610501 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.076630  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076642  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076689  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076799  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076883  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076944  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.077301  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.078456  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 12:55:43.078484  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078503  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078554  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078573  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078590  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 12:55:43.078624  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.078637  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078804  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078805  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078813  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078827  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.079277  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.080942  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 12:55:43.080976  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083192  610501 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.083214  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 12:55:43.083235  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083265  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.083750  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083781  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083802  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083820  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083933  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.084415  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.085529  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 12:55:43.086510  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.087336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.087938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.088370  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.089371  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 12:55:43.088654  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.088714  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.089430  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.089714  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.091686  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 12:55:43.091791  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.091960  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.091975  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.093882  610501 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 12:55:43.096277  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 12:55:43.096308  610501 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 12:55:43.096334  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.093957  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.094177  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.094370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.097828  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0520 12:55:43.098616  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 12:55:43.098900  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.098969  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.099332  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.099866  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.100798  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 12:55:43.102742  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 12:55:43.102765  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 12:55:43.100830  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.102790  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.100561  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.102802  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.102789  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0520 12:55:43.101525  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.102862  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.103030  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.103224  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.103401  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.103410  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.103779  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.103815  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.105233  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.105267  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.105428  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.105719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.105859  610501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.105875  610501 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 12:55:43.105887  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.106101  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.106122  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.106160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.106373  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.106425  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.106575  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.106861  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.107019  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.108154  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.110645  610501 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 12:55:43.108938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.110686  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.109448  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.113428  610501 out.go:177]   - Using image docker.io/busybox:stable
	I0520 12:55:43.115405  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.113449  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.113676  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.115433  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 12:55:43.115464  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.115705  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.115895  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.118641  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119117  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.119150  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119343  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.119533  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.119694  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.119816  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.573616  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.619918  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.623606  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.643211  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.683331  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:43.683420  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 12:55:43.685462  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 12:55:43.685482  610501 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 12:55:43.701839  610501 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 12:55:43.701864  610501 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 12:55:43.716671  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.728860  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 12:55:43.728882  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 12:55:43.749092  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.752362  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.759380  610501 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 12:55:43.759401  610501 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 12:55:43.768880  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 12:55:43.768902  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 12:55:43.776942  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 12:55:43.776981  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 12:55:43.794490  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 12:55:43.794512  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 12:55:43.876312  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 12:55:43.876350  610501 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 12:55:43.928322  610501 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 12:55:43.928352  610501 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 12:55:43.980917  610501 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:43.980943  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 12:55:43.985401  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 12:55:43.985423  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 12:55:44.010497  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 12:55:44.010530  610501 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 12:55:44.025070  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.025103  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 12:55:44.025300  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 12:55:44.025326  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 12:55:44.097831  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 12:55:44.097860  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 12:55:44.099542  610501 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 12:55:44.099567  610501 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 12:55:44.109990  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 12:55:44.110015  610501 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 12:55:44.125277  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:44.152567  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.152593  610501 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 12:55:44.183917  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.199196  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 12:55:44.199234  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 12:55:44.278037  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 12:55:44.278067  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 12:55:44.293166  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.293217  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 12:55:44.297324  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 12:55:44.297351  610501 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 12:55:44.315561  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.346264  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 12:55:44.346298  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 12:55:44.453370  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 12:55:44.453396  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 12:55:44.510982  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.586650  610501 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.586684  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 12:55:44.611553  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 12:55:44.611584  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 12:55:44.726323  610501 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 12:55:44.726349  610501 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 12:55:44.881456  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 12:55:44.881482  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 12:55:44.890866  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.927590  610501 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:44.927619  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 12:55:45.137317  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 12:55:45.137345  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 12:55:45.209075  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:45.441214  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 12:55:45.441241  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 12:55:45.828932  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 12:55:45.828994  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 12:55:46.257170  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:46.257208  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 12:55:46.498819  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:47.266993  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.693329132s)
	I0520 12:55:47.267056  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267070  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267417  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:47.267482  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267504  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:47.267520  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267530  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267892  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267912  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:50.073084  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 12:55:50.073138  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.076118  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076632  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.076665  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076958  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.077217  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.077455  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.077652  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:50.468021  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 12:55:50.521617  610501 addons.go:234] Setting addon gcp-auth=true in "addons-840762"
	I0520 12:55:50.521694  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:50.522184  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.522239  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.553174  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0520 12:55:50.553754  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.554480  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.554514  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.554880  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.555571  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.555609  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.572015  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0520 12:55:50.572479  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.573041  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.573078  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.573484  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.573698  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:50.575484  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:50.575739  610501 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 12:55:50.575769  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.579095  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579655  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.579690  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579792  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.580013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.580346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.580587  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:51.388578  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.768609397s)
	I0520 12:55:51.388647  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388650  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.765002737s)
	I0520 12:55:51.388698  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388707  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.745461801s)
	I0520 12:55:51.388717  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388734  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388746  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388661  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388887  610501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.705429993s)
	I0520 12:55:51.388915  610501 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 12:55:51.388936  610501 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.705565209s)
	I0520 12:55:51.389084  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389097  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389107  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389116  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389209  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389232  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389259  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389270  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389296  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389326  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389343  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389349  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389360  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389372  610501 addons.go:470] Verifying addon ingress=true in "addons-840762"
	I0520 12:55:51.389379  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.672674787s)
	I0520 12:55:51.389405  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389425  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.392370  610501 out.go:177] * Verifying ingress addon...
	I0520 12:55:51.389528  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.640405425s)
	I0520 12:55:51.389584  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.637201371s)
	I0520 12:55:51.389624  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.264322968s)
	I0520 12:55:51.389661  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.20570414s)
	I0520 12:55:51.389732  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.074144721s)
	I0520 12:55:51.389772  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878747953s)
	I0520 12:55:51.389865  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498962158s)
	I0520 12:55:51.389933  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.180826737s)
	I0520 12:55:51.389965  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389973  610501 node_ready.go:35] waiting up to 6m0s for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.389991  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.390011  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389352  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.390014  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.394170  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394193  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394192  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394207  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394227  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394229  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394240  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394253  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394268  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394281  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394210  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394296  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394300  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394296  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394313  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394328  610501 main.go:141] libmachine: Making call to close driver server
	W0520 12:55:51.394209  610501 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394380  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394387  610501 retry.go:31] will retry after 303.389823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 12:55:51.395046  610501 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 12:55:51.395166  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395197  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395199  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395214  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395218  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395233  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395245  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395262  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395272  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395276  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395280  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395288  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395291  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395307  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395313  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395321  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395263  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395338  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395345  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395354  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395361  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395367  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395429  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395448  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395459  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395466  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395481  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395204  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395327  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395347  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395846  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.396442  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396480  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396488  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.398870  610501 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-840762 service yakd-dashboard -n yakd-dashboard
	
	I0520 12:55:51.396611  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396643  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396663  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396677  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396695  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396696  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396721  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396732  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396855  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400153  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400898  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400913  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400902  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400962  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400970  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400973  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400990  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.401004  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401036  610501 addons.go:470] Verifying addon metrics-server=true in "addons-840762"
	I0520 12:55:51.400203  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.401684  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.401704  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401745  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402068  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.402086  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.402091  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402097  610501 addons.go:470] Verifying addon registry=true in "addons-840762"
	I0520 12:55:51.405187  610501 out.go:177] * Verifying registry addon...
	I0520 12:55:51.408123  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 12:55:51.437541  610501 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 12:55:51.437563  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.449131  610501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 12:55:51.449151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:51.457909  610501 node_ready.go:49] node "addons-840762" has status "Ready":"True"
	I0520 12:55:51.457932  610501 node_ready.go:38] duration metric: took 63.66746ms for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.457941  610501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:55:51.478924  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.478955  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479239  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.479251  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479266  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479268  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.479509  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479526  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 12:55:51.479651  610501 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0520 12:55:51.494377  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.508970  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.508991  610501 pod_ready.go:81] duration metric: took 14.583357ms for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.509001  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544741  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.544772  610501 pod_ready.go:81] duration metric: took 35.763404ms for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544784  610501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576819  610501 pod_ready.go:92] pod "etcd-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.576843  610501 pod_ready.go:81] duration metric: took 32.050234ms for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576852  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592484  610501 pod_ready.go:92] pod "kube-apiserver-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.592520  610501 pod_ready.go:81] duration metric: took 15.660119ms for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592536  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.698831  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:51.797633  610501 pod_ready.go:92] pod "kube-controller-manager-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.797657  610501 pod_ready.go:81] duration metric: took 205.113267ms for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.797669  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.892953  610501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-840762" context rescaled to 1 replicas
	I0520 12:55:51.899463  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.912554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.200864  610501 pod_ready.go:92] pod "kube-proxy-mpkr9" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.200894  610501 pod_ready.go:81] duration metric: took 403.210884ms for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.200908  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.404611  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:52.417071  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.607922  610501 pod_ready.go:92] pod "kube-scheduler-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.607946  610501 pod_ready.go:81] duration metric: took 407.031521ms for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.607957  610501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.938316  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.939704  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.105590  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.606697582s)
	I0520 12:55:53.105615  610501 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529845767s)
	I0520 12:55:53.105664  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.105679  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.108268  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:53.105995  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.106025  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.110677  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.110703  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.112892  610501 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 12:55:53.110719  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.115284  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 12:55:53.115305  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 12:55:53.115627  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.115673  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.115691  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.115708  610501 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:53.118485  610501 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 12:55:53.122364  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 12:55:53.138587  610501 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 12:55:53.138615  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:53.192835  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 12:55:53.192870  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 12:55:53.284131  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.284160  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 12:55:53.399393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.413779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:53.418308  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.628280  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:53.677186  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.97829441s)
	I0520 12:55:53.677265  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677280  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677596  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677626  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.677630  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.677637  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677662  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677944  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677959  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.903390  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.913905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.129023  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.400578  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.414433  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.634153  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.639118  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:54.957073  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.957497  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.969504  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.551140451s)
	I0520 12:55:54.969566  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.969580  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.969979  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.969997  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.969998  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970008  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.970019  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.970333  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970359  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.970372  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.971645  610501 addons.go:470] Verifying addon gcp-auth=true in "addons-840762"
	I0520 12:55:54.974788  610501 out.go:177] * Verifying gcp-auth addon...
	I0520 12:55:54.977686  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 12:55:54.992478  610501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 12:55:54.992501  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:55.127400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.399268  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.413367  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:55.627381  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.916014  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.918171  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.981718  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.127730  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.399560  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.413077  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.482224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.627478  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.900468  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.912466  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.981665  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.115037  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:57.130520  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.400035  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.413623  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.481613  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.629820  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.915039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.981464  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.127457  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.400777  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.414573  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.481462  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.628832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.899601  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.914331  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.982255  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.115366  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:59.133101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.401812  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.419535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.481225  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.631104  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.902353  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.912317  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.981330  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.128485  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.401561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.430286  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.482144  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.628293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.899691  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.915101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.982008  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.129239  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.399224  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.414726  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.481942  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.616921  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:01.628729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.900780  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.913368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.981214  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.127371  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.401377  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.414207  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.482101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.627879  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.900216  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.914014  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.982218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.130013  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.400273  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.413347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:03.481203  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.629010  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.899658  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.913498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.022081  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.115681  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:04.128931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.399719  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.413265  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.630465  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.915827  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.982611  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.127045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.399804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.413527  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.482587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.628542  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.900077  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.913575  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.981299  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.131335  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.399067  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.413005  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.482481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.617357  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:06.629066  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.899839  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.913012  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.982047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.132364  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.399705  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.417400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.481431  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.628233  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.900194  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.912856  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.981096  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.130863  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.399114  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.421325  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.488216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.626810  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.899746  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.913412  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.981447  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.114772  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:09.127612  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.399816  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.414275  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.481644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.628774  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.900228  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.915686  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.983410  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.128911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.399503  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.413047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.482114  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.627627  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.912741  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.981653  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.127586  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.399736  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.415842  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.482111  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.616098  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:11.631401  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.899584  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.914011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.982488  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.133642  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.404826  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.415781  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.482240  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.627875  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.900429  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.913578  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.982373  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.128350  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.400020  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.412649  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.481828  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.627553  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.899893  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.912654  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.981503  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.115122  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:14.129175  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.400146  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.413152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.481089  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.628054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.900376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.920739  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.982583  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.127618  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.400262  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.415277  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.482039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.627946  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.900718  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.912777  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.982140  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.129993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.399519  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.412993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.482054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.614742  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:16.628387  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.902864  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.916738  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.982514  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.127713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.398762  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.416228  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.481442  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.628109  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.901062  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.915833  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.983591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.128602  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.400312  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.413380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.481469  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.627648  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.900162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.913170  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.981679  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.114147  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:19.127641  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.399059  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.416675  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.481893  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.628587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.901500  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.914861  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.982268  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.127892  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.400086  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.412871  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.481643  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.631895  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.899376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.913218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.983029  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.115273  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:21.128235  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.398928  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.412581  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.481844  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.628150  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.899645  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.913721  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.981633  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.400392  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.413600  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.482801  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.628019  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.900239  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.913015  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.981463  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.139117  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:23.140261  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.399288  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.415368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.481661  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.629617  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.902440  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.915257  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.981352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.129929  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.399488  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.413165  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.482158  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.627817  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.899083  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.915671  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.981425  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.399318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.413105  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.482011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.613886  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:25.627368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.902246  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.912609  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.981536  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.129732  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.529301  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.529596  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.529663  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.633421  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.901177  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.915422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.981413  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.127789  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.398754  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.413042  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.482631  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.614073  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:27.629448  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.900640  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.913221  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.981368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.132334  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.399797  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.413632  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.628716  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.900159  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.914554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.981591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.127504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.399722  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.414894  610501 kapi.go:107] duration metric: took 38.006762133s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 12:56:29.481634  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.614187  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:29.627857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.899322  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.981345  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.128550  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.400316  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.481555  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.627746  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.900189  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.982356  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.129538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.400422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.481492  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.629916  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.899144  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.981857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.114220  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:32.127498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.399699  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.482072  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.651101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.899211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.981322  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.127482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.401190  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.501374  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.628422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.900401  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.981380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.127915  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.400211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.484293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.614543  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:34.627483  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.902843  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.981683  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.127848  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.398956  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.481444  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.626983  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.900313  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.980852  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.128263  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:36.401318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:36.482199  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.616548  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:36.628510  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.039771  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.040297  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.128332  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.399002  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.481655  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.627644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.900542  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.981657  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.127698  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.399200  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.481409  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.628445  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.899393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.981201  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.370826  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.372189  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:39.399948  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.481855  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.627676  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.898860  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.981735  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.128056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.399370  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.481858  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.628636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.900139  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.982329  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.130978  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.399499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.481032  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.614210  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:41.627128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.899422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.981776  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.127905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.398936  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.481585  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.629134  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.899492  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.982922  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.127672  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.400155  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.481991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.615112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:43.629339  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.899804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.983481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.127535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.399564  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.481474  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.633982  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.899485  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.981347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.127532  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.413987  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.481650  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.615259  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:45.629151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.899534  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.981133  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.127626  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.401424  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.481108  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.626748  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.899481  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.983910  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.127352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.400499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.481216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.629148  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.899944  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.981178  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.114820  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:48.126832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.400385  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.481113  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.627340  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.900317  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.982939  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.440975  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.448941  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.483270  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.627430  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.899374  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.983132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.127931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.404223  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.482231  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.613962  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:50.627506  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.901701  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.981212  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.253571  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.400214  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.485666  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.628816  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.899909  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.981764  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.132414  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.400653  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.482230  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.627845  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.981128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.114152  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:53.127321  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.399495  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.480504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.627259  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.899327  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.982045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.126980  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.400103  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.482185  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.630283  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.899841  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.982038  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.127806  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.400082  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.482058  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.614659  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:55.628985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.899964  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.981440  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.145450  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.400153  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.481988  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.627636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.903212  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.985482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.127953  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.405938  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.480991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.615293  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:57.627790  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.899165  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.981629  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.295639  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.401472  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.480992  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.628426  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.899375  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.982298  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.128070  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.399507  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.484338  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.630551  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:59.636538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.900561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.982224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.129894  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.399561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.482729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.627508  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.903740  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.981954  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.133438  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:01.399150  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:01.481779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.630056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.352725  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.353084  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.353297  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.357311  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:02.399678  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.481822  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.627596  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.899845  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.981411  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.127911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.398988  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.481636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.632574  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.899755  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.981290  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.128310  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.414840  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:04.481441  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.613658  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:04.629956  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.901144  610501 kapi.go:107] duration metric: took 1m13.506095567s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 12:57:04.981604  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.128191  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.481173  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.628513  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.982076  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.127702  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.481434  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.614389  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:06.627307  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.981074  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.127319  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.481753  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.627396  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.981256  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.127837  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:08.483769  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.627352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.127470  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.132668  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.143694  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:09.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.627347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.982170  610501 kapi.go:107] duration metric: took 1m15.004478307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 12:57:09.984996  610501 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-840762 cluster.
	I0520 12:57:09.987400  610501 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 12:57:09.989848  610501 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 12:57:10.128713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:10.626906  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.126993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.615193  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:11.627544  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.127562  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.627291  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.127538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.615932  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:13.627132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:14.129554  610501 kapi.go:107] duration metric: took 1m21.00719057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 12:57:14.132384  610501 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, yakd, helm-tiller, ingress-dns, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0520 12:57:14.134475  610501 addons.go:505] duration metric: took 1m31.2134234s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner yakd helm-tiller ingress-dns metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0520 12:57:16.114935  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:18.615065  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:21.115704  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:23.614492  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:25.615476  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:28.115096  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:30.613576  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:32.615824  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:35.114244  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:37.114736  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:39.115280  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:41.616112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:44.115963  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:46.613676  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:48.615457  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:49.115531  610501 pod_ready.go:92] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.115556  610501 pod_ready.go:81] duration metric: took 1m56.507573924s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.115567  610501 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120872  610501 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.120891  610501 pod_ready.go:81] duration metric: took 5.316291ms for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120917  610501 pod_ready.go:38] duration metric: took 1m57.662965814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:57:49.120943  610501 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:57:49.121015  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:49.121087  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:49.196694  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:49.196728  610501 cri.go:89] found id: ""
	I0520 12:57:49.196740  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:49.196806  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.201213  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:49.201309  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:49.261920  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.261956  610501 cri.go:89] found id: ""
	I0520 12:57:49.261967  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:49.262042  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.265960  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:49.266026  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:49.311594  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.311616  610501 cri.go:89] found id: ""
	I0520 12:57:49.311624  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:49.311677  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.315953  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:49.316040  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:49.364885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:49.364924  610501 cri.go:89] found id: ""
	I0520 12:57:49.364932  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:49.364988  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.369010  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:49.369072  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:49.424747  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:49.424768  610501 cri.go:89] found id: ""
	I0520 12:57:49.424776  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:49.424834  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.428991  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:49.429080  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:49.499475  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.499510  610501 cri.go:89] found id: ""
	I0520 12:57:49.499523  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:49.499594  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.504418  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:49.504502  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:49.561072  610501 cri.go:89] found id: ""
	I0520 12:57:49.561100  610501 logs.go:276] 0 containers: []
	W0520 12:57:49.561113  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:49.561123  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:49.561138  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:49.654245  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:49.654289  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.728091  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:49.728129  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.807124  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:49.807159  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.880558  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:49.880602  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:49.936020  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:49.936062  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:49.950180  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:49.950226  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:50.132293  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:50.132328  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:50.176058  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:50.176093  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:50.218071  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:50.218105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:50.255262  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:50.255300  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:53.392370  610501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:57:53.425325  610501 api_server.go:72] duration metric: took 2m10.504279951s to wait for apiserver process to appear ...
	I0520 12:57:53.425356  610501 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:57:53.425406  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:53.425466  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:53.460785  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.460818  610501 cri.go:89] found id: ""
	I0520 12:57:53.460830  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:53.460890  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.464985  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:53.465054  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:53.500156  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:53.500182  610501 cri.go:89] found id: ""
	I0520 12:57:53.500192  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:53.500268  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.504273  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:53.504349  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:53.542028  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:53.542056  610501 cri.go:89] found id: ""
	I0520 12:57:53.542068  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:53.542122  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.546279  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:53.546355  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:53.583434  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:53.583471  610501 cri.go:89] found id: ""
	I0520 12:57:53.583481  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:53.583549  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.587699  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:53.587757  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:53.629320  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.629350  610501 cri.go:89] found id: ""
	I0520 12:57:53.629359  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:53.629420  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.633673  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:53.633735  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:53.670154  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:53.670182  610501 cri.go:89] found id: ""
	I0520 12:57:53.670192  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:53.670259  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.674100  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:53.674173  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:53.711324  610501 cri.go:89] found id: ""
	I0520 12:57:53.711357  610501 logs.go:276] 0 containers: []
	W0520 12:57:53.711365  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:53.711380  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:53.711400  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:53.730840  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:53.730875  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:53.852051  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:53.852082  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.901591  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:53.901628  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.941072  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:53.941105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:54.644393  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:54.644441  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:54.695277  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:54.695317  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:54.775974  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:54.776021  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:54.831859  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:54.831908  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:54.876969  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:54.877020  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:54.931426  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:54.931472  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.491119  610501 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0520 12:57:57.495836  610501 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0520 12:57:57.497181  610501 api_server.go:141] control plane version: v1.30.1
	I0520 12:57:57.497205  610501 api_server.go:131] duration metric: took 4.071843024s to wait for apiserver health ...
	I0520 12:57:57.497214  610501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:57:57.497235  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:57.497313  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:57.534814  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:57.534847  610501 cri.go:89] found id: ""
	I0520 12:57:57.534857  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:57.534924  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.538897  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:57.538957  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:57.578468  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:57.578502  610501 cri.go:89] found id: ""
	I0520 12:57:57.578511  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:57.578571  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.582910  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:57.582980  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:57.622272  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:57.622294  610501 cri.go:89] found id: ""
	I0520 12:57:57.622303  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:57.622353  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.626295  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:57.626351  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:57.671885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.671910  610501 cri.go:89] found id: ""
	I0520 12:57:57.671918  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:57.671970  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.676755  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:57.676827  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:57.713995  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:57.714014  610501 cri.go:89] found id: ""
	I0520 12:57:57.714023  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:57.714084  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.718184  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:57.718247  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:57.755752  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.755782  610501 cri.go:89] found id: ""
	I0520 12:57:57.755793  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:57.755845  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.759887  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:57.759953  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:57.796173  610501 cri.go:89] found id: ""
	I0520 12:57:57.796207  610501 logs.go:276] 0 containers: []
	W0520 12:57:57.796218  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:57.796230  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:57.796243  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.843540  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:57.843582  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:58.695225  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:58.695278  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:58.734177  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:58.734221  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:58.798029  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:58.798075  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:58.879582  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:58.879638  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:58.894417  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:58.894467  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:59.011252  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:59.011297  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:59.058509  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:59.058547  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:59.120006  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:59.120045  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:59.157503  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:59.157537  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:58:01.712083  610501 system_pods.go:59] 18 kube-system pods found
	I0520 12:58:01.712116  610501 system_pods.go:61] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.712121  610501 system_pods.go:61] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.712124  610501 system_pods.go:61] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.712127  610501 system_pods.go:61] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.712130  610501 system_pods.go:61] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.712133  610501 system_pods.go:61] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.712136  610501 system_pods.go:61] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.712138  610501 system_pods.go:61] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.712141  610501 system_pods.go:61] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.712144  610501 system_pods.go:61] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.712146  610501 system_pods.go:61] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.712149  610501 system_pods.go:61] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.712152  610501 system_pods.go:61] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.712154  610501 system_pods.go:61] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.712157  610501 system_pods.go:61] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.712160  610501 system_pods.go:61] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.712164  610501 system_pods.go:61] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.712169  610501 system_pods.go:61] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.712174  610501 system_pods.go:74] duration metric: took 4.214955142s to wait for pod list to return data ...
	I0520 12:58:01.712182  610501 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:58:01.714213  610501 default_sa.go:45] found service account: "default"
	I0520 12:58:01.714230  610501 default_sa.go:55] duration metric: took 2.042647ms for default service account to be created ...
	I0520 12:58:01.714236  610501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:58:01.722252  610501 system_pods.go:86] 18 kube-system pods found
	I0520 12:58:01.722281  610501 system_pods.go:89] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.722287  610501 system_pods.go:89] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.722291  610501 system_pods.go:89] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.722296  610501 system_pods.go:89] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.722312  610501 system_pods.go:89] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.722317  610501 system_pods.go:89] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.722321  610501 system_pods.go:89] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.722325  610501 system_pods.go:89] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.722329  610501 system_pods.go:89] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.722333  610501 system_pods.go:89] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.722340  610501 system_pods.go:89] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.722344  610501 system_pods.go:89] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.722350  610501 system_pods.go:89] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.722354  610501 system_pods.go:89] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.722360  610501 system_pods.go:89] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.722364  610501 system_pods.go:89] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.722370  610501 system_pods.go:89] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.722376  610501 system_pods.go:89] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.722382  610501 system_pods.go:126] duration metric: took 8.141251ms to wait for k8s-apps to be running ...
	I0520 12:58:01.722391  610501 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:58:01.722435  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:58:01.736978  610501 system_svc.go:56] duration metric: took 14.575937ms WaitForService to wait for kubelet
	I0520 12:58:01.737014  610501 kubeadm.go:576] duration metric: took 2m18.815967987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:58:01.737035  610501 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:58:01.740116  610501 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:58:01.740145  610501 node_conditions.go:123] node cpu capacity is 2
	I0520 12:58:01.740159  610501 node_conditions.go:105] duration metric: took 3.120029ms to run NodePressure ...
	I0520 12:58:01.740172  610501 start.go:240] waiting for startup goroutines ...
	I0520 12:58:01.740179  610501 start.go:245] waiting for cluster config update ...
	I0520 12:58:01.740195  610501 start.go:254] writing updated cluster config ...
	I0520 12:58:01.740485  610501 ssh_runner.go:195] Run: rm -f paused
	I0520 12:58:01.793273  610501 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:58:01.796159  610501 out.go:177] * Done! kubectl is now configured to use "addons-840762" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 13:03:34 addons-840762 crio[679]: time="2024-05-20 13:03:34.980048451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210214979967343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fd268ff-8297-4554-b24d-d99656eaefc9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:03:34 addons-840762 crio[679]: time="2024-05-20 13:03:34.981288325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcae5d2f-741e-4a05-a9ef-783839560736 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:34 addons-840762 crio[679]: time="2024-05-20 13:03:34.981360458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcae5d2f-741e-4a05-a9ef-783839560736 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:34 addons-840762 crio[679]: time="2024-05-20 13:03:34.981728347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716210162001950137,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5b
c,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c113
73e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&Container
Metadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcae5d2f-741e-4a05-a9ef-783839560736 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.015752440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c350ae43-e91e-4551-a89d-53d7f442cc82 name=/runtime.v1.RuntimeService/Version
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.015826468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c350ae43-e91e-4551-a89d-53d7f442cc82 name=/runtime.v1.RuntimeService/Version
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.017095088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=742f0e5c-a28e-4941-9ae3-f2180276de9e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.018490891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210215018463716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=742f0e5c-a28e-4941-9ae3-f2180276de9e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.019025255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=322c8798-1ab8-4f3d-8fe9-d0f37052e421 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.019094130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=322c8798-1ab8-4f3d-8fe9-d0f37052e421 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.019450850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716210162001950137,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5b
c,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c113
73e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&Container
Metadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=322c8798-1ab8-4f3d-8fe9-d0f37052e421 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.057545111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fa66134-28e2-43b5-90d9-748f3e7e2434 name=/runtime.v1.RuntimeService/Version
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.057634354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fa66134-28e2-43b5-90d9-748f3e7e2434 name=/runtime.v1.RuntimeService/Version
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.058970533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3d1fb32-d5aa-4858-8dc0-6b8e35cdc95c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.060461543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210215060429776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3d1fb32-d5aa-4858-8dc0-6b8e35cdc95c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.061516694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f29c706d-95e1-4066-95eb-b93332c12634 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.061596047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f29c706d-95e1-4066-95eb-b93332c12634 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.061930921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716210162001950137,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72dc52b7185c345946265aa837801bc59e53c72f43dbe5c4f0566cee5e561b9,PodSandboxId:dbca97dc90a2f36f8a4257173cfc780d00d43448b3c3df69e147c56e45cd6294,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716210072603263978,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-cfg4n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9200c5f6-46f7-480c-a3a2-d0d9fa3ca5b
c,},Annotations:map[string]string{io.kubernetes.container.hash: 18d692d,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0affd33442670b381ad0ac8af56bea187d55b8d77ecb6f702f331e6d7cd27a80,PodSandboxId:d99df72c564719e50a49cf57467abeb86c8818263fc64a94a0497147a84174e2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716209931120181872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kube
rnetes.pod.uid: efadd51f-18e9-48cb-bc58-103881fd9263,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac930cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382f55ae91d16dc6ab148279bf397cd687355480ad2539082316aa7dd601ef94,PodSandboxId:fd3c7dc9776c3eeff41b31b93f2a49af94cd0716954b72ec1f86161902f9ceb9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716209919405672036,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernete
s.pod.name: headlamp-68456f997b-5k6z6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c7973bac-822b-4c44-a10c-65bcfdb5f17d,},Annotations:map[string]string{io.kubernetes.container.hash: 98021e3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNN
ING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c113
73e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&Container
Metadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[string]string{io.kubernetes.container.hash: e884271,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f29c706d-95e1-4066-95eb-b93332c12634 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076204086Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c.BUE1N2\"" file="server/server.go:805"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076271172Z" level=debug msg="Container or sandbox exited: 0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c.BUE1N2" file="server/server.go:810"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076215288Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c.BUE1N2\"" file="server/server.go:805"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076233301Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c\"" file="server/server.go:805"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076335685Z" level=debug msg="Container or sandbox exited: 0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c" file="server/server.go:810"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076357024Z" level=debug msg="container exited and found: 0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c" file="server/server.go:825"
	May 20 13:03:35 addons-840762 crio[679]: time="2024-05-20 13:03:35.076238086Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c.BUE1N2\"" file="server/server.go:805"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5354208a2e1bc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a       53 seconds ago      Exited              gadget                    6                   fe89ff446540b       gadget-4r2zg
	d72dc52b7185c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   dbca97dc90a2f       hello-world-app-86c47465fc-cfg4n
	0affd33442670       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                         4 minutes ago       Running             nginx                     0                   d99df72c56471       nginx
	382f55ae91d16       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   4 minutes ago       Running             headlamp                  0                   fd3c7dc9776c3       headlamp-68456f997b-5k6z6
	135a96f190c99       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   6684887cb09aa       gcp-auth-5db96cd9b4-cjjrn
	0e9db02ffacd4       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Exited              metrics-server            0                   354aac86fd4a4       metrics-server-c59844bb4-8g977
	5640739ae135d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   47557659a5b0a       yakd-dashboard-5ddbf7d777-hgp7b
	78fcce271acb3       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4          7 minutes ago       Running             cloud-spanner-emulator    0                   b221456a6d2ca       cloud-spanner-emulator-6fcd4f6f98-tzksc
	8e66ec7f2ae77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   345c392f5452d       storage-provisioner
	7059a82048d9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   79eaca6d02036       coredns-7db6d8ff4d-vp4g8
	a0af7ffce7a12       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        7 minutes ago       Running             kube-proxy                0                   e6b145e6b7a46       kube-proxy-mpkr9
	10c3d12060059       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   a496785b5b5f5       etcd-addons-840762
	6363b2ba4829a       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        8 minutes ago       Running             kube-scheduler            0                   31de9fbe23d9b       kube-scheduler-addons-840762
	6cca9c1fefcd5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        8 minutes ago       Running             kube-controller-manager   0                   ef74fd5cfc67f       kube-controller-manager-addons-840762
	9b2ffe0b08efe       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        8 minutes ago       Running             kube-apiserver            0                   d591c03b18dc1       kube-apiserver-addons-840762
	
	
	==> coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] <==
	[INFO] 10.244.0.7:36312 - 833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133942s
	[INFO] 10.244.0.7:38189 - 20558 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140292s
	[INFO] 10.244.0.7:38189 - 49744 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00020756s
	[INFO] 10.244.0.7:40716 - 37403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000209728s
	[INFO] 10.244.0.7:40716 - 54809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000338691s
	[INFO] 10.244.0.7:34802 - 60141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076847s
	[INFO] 10.244.0.7:34802 - 13548 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001884167s
	[INFO] 10.244.0.7:46201 - 18591 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198s
	[INFO] 10.244.0.7:46201 - 17818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000450713s
	[INFO] 10.244.0.7:44069 - 5855 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169721s
	[INFO] 10.244.0.7:44069 - 43219 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081091s
	[INFO] 10.244.0.7:48623 - 843 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089929s
	[INFO] 10.244.0.7:48623 - 64597 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000185645s
	[INFO] 10.244.0.7:51149 - 3489 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100909s
	[INFO] 10.244.0.7:51149 - 15454 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071835s
	[INFO] 10.244.0.22:56551 - 48499 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373719s
	[INFO] 10.244.0.22:40318 - 16711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108319s
	[INFO] 10.244.0.22:39466 - 14127 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116621s
	[INFO] 10.244.0.22:54206 - 13934 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058539s
	[INFO] 10.244.0.22:56712 - 54214 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114656s
	[INFO] 10.244.0.22:56107 - 36752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091644s
	[INFO] 10.244.0.22:46924 - 25436 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001332083s
	[INFO] 10.244.0.22:54686 - 62944 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001589718s
	[INFO] 10.244.0.24:57177 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000620091s
	[INFO] 10.244.0.24:37965 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088343s
	
	
	==> describe nodes <==
	Name:               addons-840762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-840762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=addons-840762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-840762
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:55:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-840762
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:01:37 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:01:37 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:01:37 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:01:37 +0000   Mon, 20 May 2024 12:55:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    addons-840762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc07a572c69424e8b07c61391a8d459
	  System UUID:                0bc07a57-2c69-424e-8b07-c61391a8d459
	  Boot ID:                    1b84f601-3379-4074-9d98-222bacd601d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-tzksc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  default                     hello-world-app-86c47465fc-cfg4n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  gadget                      gadget-4r2zg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  gcp-auth                    gcp-auth-5db96cd9b4-cjjrn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  headlamp                    headlamp-68456f997b-5k6z6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 coredns-7db6d8ff4d-vp4g8                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m53s
	  kube-system                 etcd-addons-840762                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m6s
	  kube-system                 kube-apiserver-addons-840762               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-controller-manager-addons-840762      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-proxy-mpkr9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-scheduler-addons-840762               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-hgp7b            0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m12s (x8 over 8m13s)  kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s (x8 over 8m13s)  kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s (x7 over 8m13s)  kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m6s                   kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s                   kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s                   kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m5s                   kubelet          Node addons-840762 status is now: NodeReady
	  Normal  RegisteredNode           7m54s                  node-controller  Node addons-840762 event: Registered Node addons-840762 in Controller
	
	
	==> dmesg <==
	[  +5.571836] kauditd_printk_skb: 66 callbacks suppressed
	[May20 12:56] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.326752] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.228650] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.092557] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.597356] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.620848] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.044087] kauditd_printk_skb: 61 callbacks suppressed
	[May20 12:57] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.406626] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.331220] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.843229] kauditd_printk_skb: 37 callbacks suppressed
	[May20 12:58] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.616141] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.048897] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.235566] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.100710] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.393916] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.691140] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.455540] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.268416] kauditd_printk_skb: 4 callbacks suppressed
	[May20 12:59] kauditd_printk_skb: 15 callbacks suppressed
	[May20 13:01] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.742137] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:03] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] <==
	{"level":"info","ts":"2024-05-20T12:57:02.321521Z","caller":"traceutil/trace.go:171","msg":"trace[2119614539] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1143; }","duration":"213.7996ms","start":"2024-05-20T12:57:02.107675Z","end":"2024-05-20T12:57:02.321474Z","steps":["trace[2119614539] 'agreement among raft nodes before linearized reading'  (duration: 212.506688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.90392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:02.322075Z","caller":"traceutil/trace.go:171","msg":"trace[402781208] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1143; }","duration":"229.620558ms","start":"2024-05-20T12:57:02.092443Z","end":"2024-05-20T12:57:02.322064Z","steps":["trace[402781208] 'agreement among raft nodes before linearized reading'  (duration: 227.894066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.989566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-20T12:57:02.322823Z","caller":"traceutil/trace.go:171","msg":"trace[1202776357] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1143; }","duration":"276.407737ms","start":"2024-05-20T12:57:02.046401Z","end":"2024-05-20T12:57:02.322809Z","steps":["trace[1202776357] 'agreement among raft nodes before linearized reading'  (duration: 273.988404ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:09.105325Z","caller":"traceutil/trace.go:171","msg":"trace[1398827240] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"316.084124ms","start":"2024-05-20T12:57:08.789227Z","end":"2024-05-20T12:57:09.105311Z","steps":["trace[1398827240] 'process raft request'  (duration: 315.952768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.105525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:57:08.789209Z","time spent":"316.215355ms","remote":"127.0.0.1:53244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" mod_revision:1128 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" > >"}
	{"level":"info","ts":"2024-05-20T12:57:09.106442Z","caller":"traceutil/trace.go:171","msg":"trace[326326687] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"209.919064ms","start":"2024-05-20T12:57:08.89651Z","end":"2024-05-20T12:57:09.106429Z","steps":["trace[326326687] 'read index received'  (duration: 209.914275ms)","trace[326326687] 'applied index is now lower than readState.Index'  (duration: 4.136µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:09.106829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.885262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-05-20T12:57:09.106896Z","caller":"traceutil/trace.go:171","msg":"trace[1504113938] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1175; }","duration":"144.001652ms","start":"2024-05-20T12:57:08.962878Z","end":"2024-05-20T12:57:09.10688Z","steps":["trace[1504113938] 'agreement among raft nodes before linearized reading'  (duration: 143.810281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.107098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.589083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-05-20T12:57:09.107231Z","caller":"traceutil/trace.go:171","msg":"trace[2064512433] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048; range_end:; response_count:1; response_revision:1175; }","duration":"210.737054ms","start":"2024-05-20T12:57:08.896486Z","end":"2024-05-20T12:57:09.107223Z","steps":["trace[2064512433] 'agreement among raft nodes before linearized reading'  (duration: 210.56075ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751353Z","caller":"traceutil/trace.go:171","msg":"trace[882681731] linearizableReadLoop","detail":"{readStateIndex:1334; appliedIndex:1333; }","duration":"159.848303ms","start":"2024-05-20T12:57:45.591489Z","end":"2024-05-20T12:57:45.751337Z","steps":["trace[882681731] 'read index received'  (duration: 159.544489ms)","trace[882681731] 'applied index is now lower than readState.Index'  (duration: 303.24µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:45.751582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.05588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:45.751624Z","caller":"traceutil/trace.go:171","msg":"trace[1190867087] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1286; }","duration":"160.147841ms","start":"2024-05-20T12:57:45.591463Z","end":"2024-05-20T12:57:45.751611Z","steps":["trace[1190867087] 'agreement among raft nodes before linearized reading'  (duration: 159.984942ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751907Z","caller":"traceutil/trace.go:171","msg":"trace[346556204] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"270.337258ms","start":"2024-05-20T12:57:45.481561Z","end":"2024-05-20T12:57:45.751899Z","steps":["trace[346556204] 'process raft request'  (duration: 269.51066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.70607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.237789Z","time spent":"468.269671ms","remote":"127.0.0.1:52996","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-05-20T12:58:18.706201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.609863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:58:18.70625Z","caller":"traceutil/trace.go:171","msg":"trace[779024922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1361; }","duration":"329.774357ms","start":"2024-05-20T12:58:18.376465Z","end":"2024-05-20T12:58:18.706239Z","steps":["trace[779024922] 'agreement among raft nodes before linearized reading'  (duration: 329.619372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.706322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.376449Z","time spent":"329.864586ms","remote":"127.0.0.1:52954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-20T12:58:18.70606Z","caller":"traceutil/trace.go:171","msg":"trace[1211898191] linearizableReadLoop","detail":"{readStateIndex:1416; appliedIndex:1415; }","duration":"329.524623ms","start":"2024-05-20T12:58:18.3765Z","end":"2024-05-20T12:58:18.706024Z","steps":["trace[1211898191] 'read index received'  (duration: 329.28852ms)","trace[1211898191] 'applied index is now lower than readState.Index'  (duration: 235.026µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:58:18.706606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.994414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85719"}
	{"level":"info","ts":"2024-05-20T12:58:18.706632Z","caller":"traceutil/trace.go:171","msg":"trace[1149837529] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1361; }","duration":"145.072721ms","start":"2024-05-20T12:58:18.561551Z","end":"2024-05-20T12:58:18.706624Z","steps":["trace[1149837529] 'agreement among raft nodes before linearized reading'  (duration: 144.887626ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:58:37.684314Z","caller":"traceutil/trace.go:171","msg":"trace[809617530] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"138.77512ms","start":"2024-05-20T12:58:37.545506Z","end":"2024-05-20T12:58:37.684281Z","steps":["trace[809617530] 'process raft request'  (duration: 138.684656ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:59:26.543828Z","caller":"traceutil/trace.go:171","msg":"trace[1911464398] transaction","detail":"{read_only:false; response_revision:1806; number_of_response:1; }","duration":"109.341933ms","start":"2024-05-20T12:59:26.434453Z","end":"2024-05-20T12:59:26.543795Z","steps":["trace[1911464398] 'process raft request'  (duration: 109.039965ms)"],"step_count":1}
	
	
	==> gcp-auth [135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c] <==
	2024/05/20 12:57:09 GCP Auth Webhook started!
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	2024/05/20 12:58:32 Ready to marshal response ...
	2024/05/20 12:58:32 Ready to write response ...
	2024/05/20 12:58:32 Ready to marshal response ...
	2024/05/20 12:58:32 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:33 Ready to marshal response ...
	2024/05/20 12:58:33 Ready to write response ...
	2024/05/20 12:58:46 Ready to marshal response ...
	2024/05/20 12:58:46 Ready to write response ...
	2024/05/20 12:58:53 Ready to marshal response ...
	2024/05/20 12:58:53 Ready to write response ...
	2024/05/20 13:01:09 Ready to marshal response ...
	2024/05/20 13:01:09 Ready to write response ...
	
	
	==> kernel <==
	 13:03:35 up 8 min,  0 users,  load average: 0.30, 0.68, 0.51
	Linux addons-840762 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] <==
	E0520 12:57:48.934987       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0520 12:57:48.934742       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.937295       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.944679       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	I0520 12:57:49.061190       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 12:58:27.221268       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 12:58:33.015819       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.209.58"}
	I0520 12:58:46.386477       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 12:58:46.570418       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.163.48"}
	I0520 12:58:48.529971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.530022       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.555537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.555695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.587008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.587056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.602159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.602204       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:58:48.604193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:58:48.604228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0520 12:58:49.587843       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 12:58:49.605168       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0520 12:58:49.629588       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0520 12:58:49.633268       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 13:01:09.166329       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.226.165"}
	
	
	==> kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] <==
	I0520 13:01:11.848055       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0520 13:01:12.820272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="11.684516ms"
	I0520 13:01:12.821041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.441µs"
	W0520 13:01:19.344715       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:01:19.344892       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 13:01:21.864491       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0520 13:01:29.270378       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:01:29.270434       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:01:58.827532       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:01:58.827751       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:02:14.068528       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:02:14.068696       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:02:17.724297       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:02:17.724348       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:02:35.256394       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:02:35.256444       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:02:48.638533       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:02:48.638823       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:03:03.749100       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:03:03.749189       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:03:22.058028       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:03:22.058248       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 13:03:25.244034       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 13:03:25.244086       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 13:03:33.941985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.056µs"
	
	
	==> kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] <==
	I0520 12:55:43.546950       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:55:43.566793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	I0520 12:55:43.676877       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:55:43.676950       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:55:43.676967       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:55:43.680164       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:55:43.680354       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:55:43.680369       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:55:43.683205       1 config.go:192] "Starting service config controller"
	I0520 12:55:43.683236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:55:43.683271       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:55:43.683275       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:55:43.683845       1 config.go:319] "Starting node config controller"
	I0520 12:55:43.683852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:55:43.783393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:55:43.783421       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:55:43.784288       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] <==
	W0520 12:55:26.558819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:26.558844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:26.558898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:55:26.558919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:55:26.559021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:55:26.559083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:55:27.360805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:55:27.360863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:55:27.410660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:55:27.410724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 12:55:27.421620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 12:55:27.421692       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 12:55:27.595975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:55:27.596024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:55:27.615749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:55:27.615778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 12:55:27.672874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.672999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.707891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:27.707934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:27.803616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.803709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.813500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:55:27.813540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 12:55:30.530739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:02:45 addons-840762 kubelet[1276]: I0520 13:02:45.226989    1276 scope.go:117] "RemoveContainer" containerID="5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1"
	May 20 13:02:45 addons-840762 kubelet[1276]: E0520 13:02:45.227437    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:02:57 addons-840762 kubelet[1276]: I0520 13:02:57.390905    1276 scope.go:117] "RemoveContainer" containerID="5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1"
	May 20 13:02:57 addons-840762 kubelet[1276]: E0520 13:02:57.391322    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:03:11 addons-840762 kubelet[1276]: I0520 13:03:11.392722    1276 scope.go:117] "RemoveContainer" containerID="5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1"
	May 20 13:03:11 addons-840762 kubelet[1276]: E0520 13:03:11.393014    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:03:22 addons-840762 kubelet[1276]: I0520 13:03:22.391278    1276 scope.go:117] "RemoveContainer" containerID="5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1"
	May 20 13:03:22 addons-840762 kubelet[1276]: E0520 13:03:22.391712    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:03:29 addons-840762 kubelet[1276]: E0520 13:03:29.422080    1276 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:03:29 addons-840762 kubelet[1276]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:03:29 addons-840762 kubelet[1276]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:03:29 addons-840762 kubelet[1276]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:03:29 addons-840762 kubelet[1276]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:03:33 addons-840762 kubelet[1276]: I0520 13:03:33.392285    1276 scope.go:117] "RemoveContainer" containerID="5354208a2e1bcd92efadccb990980fd456cfb30492977c10c47dfd0fbbbb6ed1"
	May 20 13:03:33 addons-840762 kubelet[1276]: E0520 13:03:33.392963    1276 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4r2zg_gadget(20112e09-b29e-4ddb-96ef-4d06088304a4)\"" pod="gadget/gadget-4r2zg" podUID="20112e09-b29e-4ddb-96ef-4d06088304a4"
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.401808    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f766954-b3a4-4592-865f-b37297fefae7-tmp-dir\") pod \"2f766954-b3a4-4592-865f-b37297fefae7\" (UID: \"2f766954-b3a4-4592-865f-b37297fefae7\") "
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.401863    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhn5m\" (UniqueName: \"kubernetes.io/projected/2f766954-b3a4-4592-865f-b37297fefae7-kube-api-access-qhn5m\") pod \"2f766954-b3a4-4592-865f-b37297fefae7\" (UID: \"2f766954-b3a4-4592-865f-b37297fefae7\") "
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.402515    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f766954-b3a4-4592-865f-b37297fefae7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "2f766954-b3a4-4592-865f-b37297fefae7" (UID: "2f766954-b3a4-4592-865f-b37297fefae7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.405771    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f766954-b3a4-4592-865f-b37297fefae7-kube-api-access-qhn5m" (OuterVolumeSpecName: "kube-api-access-qhn5m") pod "2f766954-b3a4-4592-865f-b37297fefae7" (UID: "2f766954-b3a4-4592-865f-b37297fefae7"). InnerVolumeSpecName "kube-api-access-qhn5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.447964    1276 scope.go:117] "RemoveContainer" containerID="0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c"
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.490024    1276 scope.go:117] "RemoveContainer" containerID="0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c"
	May 20 13:03:35 addons-840762 kubelet[1276]: E0520 13:03:35.490744    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c\": container with ID starting with 0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c not found: ID does not exist" containerID="0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c"
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.490780    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c"} err="failed to get container status \"0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c\": rpc error: code = NotFound desc = could not find container \"0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c\": container with ID starting with 0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c not found: ID does not exist"
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.502291    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qhn5m\" (UniqueName: \"kubernetes.io/projected/2f766954-b3a4-4592-865f-b37297fefae7-kube-api-access-qhn5m\") on node \"addons-840762\" DevicePath \"\""
	May 20 13:03:35 addons-840762 kubelet[1276]: I0520 13:03:35.502311    1276 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f766954-b3a4-4592-865f-b37297fefae7-tmp-dir\") on node \"addons-840762\" DevicePath \"\""
	
	
	==> storage-provisioner [8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638] <==
	I0520 12:55:49.365029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:55:49.426698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:55:49.426754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:55:49.559886       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:55:49.560949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc1e5491-e0b6-4a74-9796-3c1c2ff6413c", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e became leader
	I0520 12:55:49.567383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	I0520 12:55:49.775880       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-840762 -n addons-840762
helpers_test.go:261: (dbg) Run:  kubectl --context addons-840762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (334.56s)
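The kube-apiserver log captured above shows the v1beta1.metrics.k8s.io APIService repeatedly failing its availability check with connection-refused errors against the metrics-server service before the addon was torn down. A rough way to inspect that state by hand against this profile is sketched below; it is illustrative only, assumes the cluster is still running, and the k8s-app=metrics-server label selector comes from the stock metrics-server manifests rather than from this report:

	kubectl --context addons-840762 get apiservice v1beta1.metrics.k8s.io             # reports Available=False while the backend is unreachable
	kubectl --context addons-840762 -n kube-system get pods -l k8s-app=metrics-server # hypothetical label selector, not taken from this run
	kubectl --context addons-840762 top nodes                                         # only succeeds once the APIService is Available=True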

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-tzksc" [14c3ddef-1fef-49b7-84cc-6d33520ba034] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004312644s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-840762
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-840762: exit status 11 (326.008245ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T12:58:29Z" level=error msg="stat /run/runc/a5c33b19087e35766b61a555cb6613b1ee826492c8f93e642cb23ffd675a60af: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 addons disable cloud-spanner -p addons-840762" : exit status 11
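One way to triage an MK_ADDON_DISABLE_PAUSED failure like this is to re-run the paused-state check quoted in the stderr directly on the node, then retry the disable. The commands below are a sketch, not part of this run, built only from invocations that already appear in this report and assuming the addons-840762 profile is still up:

	out/minikube-linux-amd64 -p addons-840762 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 addons disable cloud-spanner -p addons-840762
	out/minikube-linux-amd64 -p addons-840762 logs --file=logs.txt

A stat error on /run/runc/<id> while CRI-O still lists the container suggests the container exited during one of the addon teardowns running in parallel at 12:58, rather than a genuinely paused cluster, so retrying the disable once the stale state clears is often enough.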
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-840762 -n addons-840762
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 logs -n 25: (1.535980715s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | -p download-only-562366              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-562366              | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| start   | -o=json --download-only              | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | -p download-only-600768              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768              | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-562366              | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-600768              | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| start   | --download-only -p                   | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | binary-mirror-910817                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44813               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-910817              | binary-mirror-910817 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| addons  | enable dashboard -p                  | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | addons-840762                        |                      |         |         |                     |                     |
	| start   | -p addons-840762 --wait=true         | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:58 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         |  --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | addons-840762                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | -p addons-840762                     |                      |         |         |                     |                     |
	| ip      | addons-840762 ip                     | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	| addons  | addons-840762 addons disable         | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC | 20 May 24 12:58 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-840762        | jenkins | v1.33.1 | 20 May 24 12:58 UTC |                     |
	|         | addons-840762                        |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:54:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:54:50.749933  610501 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:54:50.750199  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750209  610501 out.go:304] Setting ErrFile to fd 2...
	I0520 12:54:50.750213  610501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:50.750399  610501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 12:54:50.750992  610501 out.go:298] Setting JSON to false
	I0520 12:54:50.751872  610501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9431,"bootTime":1716200260,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:54:50.751931  610501 start.go:139] virtualization: kvm guest
	I0520 12:54:50.754672  610501 out.go:177] * [addons-840762] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:54:50.756981  610501 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 12:54:50.759177  610501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:54:50.756934  610501 notify.go:220] Checking for updates...
	I0520 12:54:50.761478  610501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:54:50.763622  610501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:50.765719  610501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:54:50.767722  610501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:54:50.769950  610501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:54:50.803102  610501 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:54:50.805399  610501 start.go:297] selected driver: kvm2
	I0520 12:54:50.805434  610501 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:54:50.805454  610501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:54:50.806441  610501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.806556  610501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:54:50.822923  610501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:54:50.822988  610501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:54:50.823216  610501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:54:50.823247  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:54:50.823257  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:54:50.823270  610501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:54:50.823335  610501 start.go:340] cluster config:
	{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:54:50.823464  610501 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:50.827202  610501 out.go:177] * Starting "addons-840762" primary control-plane node in "addons-840762" cluster
	I0520 12:54:50.829149  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:54:50.829183  610501 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:54:50.829194  610501 cache.go:56] Caching tarball of preloaded images
	I0520 12:54:50.829274  610501 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:54:50.829286  610501 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:54:50.829591  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:54:50.829616  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json: {Name:mk1bcc97b7c3196011ae8aa65e58032d87fa57bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:54:50.829771  610501 start.go:360] acquireMachinesLock for addons-840762: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:54:50.829815  610501 start.go:364] duration metric: took 31.227µs to acquireMachinesLock for "addons-840762"
	I0520 12:54:50.829832  610501 start.go:93] Provisioning new machine with config: &{Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:54:50.829901  610501 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:54:50.832368  610501 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 12:54:50.832505  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:54:50.832552  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:54:50.847327  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0520 12:54:50.847765  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:54:50.848420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:54:50.848446  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:54:50.848806  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:54:50.849047  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:54:50.849193  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:54:50.849375  610501 start.go:159] libmachine.API.Create for "addons-840762" (driver="kvm2")
	I0520 12:54:50.849403  610501 client.go:168] LocalClient.Create starting
	I0520 12:54:50.849451  610501 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 12:54:50.991473  610501 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 12:54:51.176622  610501 main.go:141] libmachine: Running pre-create checks...
	I0520 12:54:51.176652  610501 main.go:141] libmachine: (addons-840762) Calling .PreCreateCheck
	I0520 12:54:51.177212  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:54:51.177703  610501 main.go:141] libmachine: Creating machine...
	I0520 12:54:51.177718  610501 main.go:141] libmachine: (addons-840762) Calling .Create
	I0520 12:54:51.177909  610501 main.go:141] libmachine: (addons-840762) Creating KVM machine...
	I0520 12:54:51.179266  610501 main.go:141] libmachine: (addons-840762) DBG | found existing default KVM network
	I0520 12:54:51.180081  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.179921  610539 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0520 12:54:51.180138  610501 main.go:141] libmachine: (addons-840762) DBG | created network xml: 
	I0520 12:54:51.180166  610501 main.go:141] libmachine: (addons-840762) DBG | <network>
	I0520 12:54:51.180178  610501 main.go:141] libmachine: (addons-840762) DBG |   <name>mk-addons-840762</name>
	I0520 12:54:51.180193  610501 main.go:141] libmachine: (addons-840762) DBG |   <dns enable='no'/>
	I0520 12:54:51.180204  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180218  610501 main.go:141] libmachine: (addons-840762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:54:51.180227  610501 main.go:141] libmachine: (addons-840762) DBG |     <dhcp>
	I0520 12:54:51.180235  610501 main.go:141] libmachine: (addons-840762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:54:51.180247  610501 main.go:141] libmachine: (addons-840762) DBG |     </dhcp>
	I0520 12:54:51.180255  610501 main.go:141] libmachine: (addons-840762) DBG |   </ip>
	I0520 12:54:51.180318  610501 main.go:141] libmachine: (addons-840762) DBG |   
	I0520 12:54:51.180349  610501 main.go:141] libmachine: (addons-840762) DBG | </network>
	I0520 12:54:51.180368  610501 main.go:141] libmachine: (addons-840762) DBG | 
	I0520 12:54:51.186377  610501 main.go:141] libmachine: (addons-840762) DBG | trying to create private KVM network mk-addons-840762 192.168.39.0/24...
	I0520 12:54:51.253528  610501 main.go:141] libmachine: (addons-840762) DBG | private KVM network mk-addons-840762 192.168.39.0/24 created
	I0520 12:54:51.253564  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.253446  610539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.253577  610501 main.go:141] libmachine: (addons-840762) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.253591  610501 main.go:141] libmachine: (addons-840762) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:54:51.253664  610501 main.go:141] libmachine: (addons-840762) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:54:51.515102  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.514941  610539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa...
	I0520 12:54:51.762036  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761845  610539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk...
	I0520 12:54:51.762086  610501 main.go:141] libmachine: (addons-840762) DBG | Writing magic tar header
	I0520 12:54:51.762101  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 (perms=drwx------)
	I0520 12:54:51.762118  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:54:51.762125  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 12:54:51.762131  610501 main.go:141] libmachine: (addons-840762) DBG | Writing SSH key tar header
	I0520 12:54:51.762141  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:51.761967  610539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762 ...
	I0520 12:54:51.762151  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 12:54:51.762163  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762
	I0520 12:54:51.762179  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:54:51.762201  610501 main.go:141] libmachine: (addons-840762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:54:51.762212  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 12:54:51.762223  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:51.762236  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 12:54:51.762248  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:54:51.762255  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:51.762264  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:54:51.762277  610501 main.go:141] libmachine: (addons-840762) DBG | Checking permissions on dir: /home
	I0520 12:54:51.762293  610501 main.go:141] libmachine: (addons-840762) DBG | Skipping /home - not owner
	I0520 12:54:51.763533  610501 main.go:141] libmachine: (addons-840762) define libvirt domain using xml: 
	I0520 12:54:51.763552  610501 main.go:141] libmachine: (addons-840762) <domain type='kvm'>
	I0520 12:54:51.763560  610501 main.go:141] libmachine: (addons-840762)   <name>addons-840762</name>
	I0520 12:54:51.763565  610501 main.go:141] libmachine: (addons-840762)   <memory unit='MiB'>4000</memory>
	I0520 12:54:51.763570  610501 main.go:141] libmachine: (addons-840762)   <vcpu>2</vcpu>
	I0520 12:54:51.763574  610501 main.go:141] libmachine: (addons-840762)   <features>
	I0520 12:54:51.763580  610501 main.go:141] libmachine: (addons-840762)     <acpi/>
	I0520 12:54:51.763586  610501 main.go:141] libmachine: (addons-840762)     <apic/>
	I0520 12:54:51.763593  610501 main.go:141] libmachine: (addons-840762)     <pae/>
	I0520 12:54:51.763604  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.763612  610501 main.go:141] libmachine: (addons-840762)   </features>
	I0520 12:54:51.763623  610501 main.go:141] libmachine: (addons-840762)   <cpu mode='host-passthrough'>
	I0520 12:54:51.763629  610501 main.go:141] libmachine: (addons-840762)   
	I0520 12:54:51.763646  610501 main.go:141] libmachine: (addons-840762)   </cpu>
	I0520 12:54:51.763655  610501 main.go:141] libmachine: (addons-840762)   <os>
	I0520 12:54:51.763660  610501 main.go:141] libmachine: (addons-840762)     <type>hvm</type>
	I0520 12:54:51.763665  610501 main.go:141] libmachine: (addons-840762)     <boot dev='cdrom'/>
	I0520 12:54:51.763669  610501 main.go:141] libmachine: (addons-840762)     <boot dev='hd'/>
	I0520 12:54:51.763678  610501 main.go:141] libmachine: (addons-840762)     <bootmenu enable='no'/>
	I0520 12:54:51.763688  610501 main.go:141] libmachine: (addons-840762)   </os>
	I0520 12:54:51.763701  610501 main.go:141] libmachine: (addons-840762)   <devices>
	I0520 12:54:51.763709  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='cdrom'>
	I0520 12:54:51.763728  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/boot2docker.iso'/>
	I0520 12:54:51.763746  610501 main.go:141] libmachine: (addons-840762)       <target dev='hdc' bus='scsi'/>
	I0520 12:54:51.763754  610501 main.go:141] libmachine: (addons-840762)       <readonly/>
	I0520 12:54:51.763758  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763770  610501 main.go:141] libmachine: (addons-840762)     <disk type='file' device='disk'>
	I0520 12:54:51.763779  610501 main.go:141] libmachine: (addons-840762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:54:51.763793  610501 main.go:141] libmachine: (addons-840762)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/addons-840762.rawdisk'/>
	I0520 12:54:51.763806  610501 main.go:141] libmachine: (addons-840762)       <target dev='hda' bus='virtio'/>
	I0520 12:54:51.763814  610501 main.go:141] libmachine: (addons-840762)     </disk>
	I0520 12:54:51.763826  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763839  610501 main.go:141] libmachine: (addons-840762)       <source network='mk-addons-840762'/>
	I0520 12:54:51.763850  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763859  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763868  610501 main.go:141] libmachine: (addons-840762)     <interface type='network'>
	I0520 12:54:51.763874  610501 main.go:141] libmachine: (addons-840762)       <source network='default'/>
	I0520 12:54:51.763886  610501 main.go:141] libmachine: (addons-840762)       <model type='virtio'/>
	I0520 12:54:51.763898  610501 main.go:141] libmachine: (addons-840762)     </interface>
	I0520 12:54:51.763910  610501 main.go:141] libmachine: (addons-840762)     <serial type='pty'>
	I0520 12:54:51.763921  610501 main.go:141] libmachine: (addons-840762)       <target port='0'/>
	I0520 12:54:51.763931  610501 main.go:141] libmachine: (addons-840762)     </serial>
	I0520 12:54:51.763942  610501 main.go:141] libmachine: (addons-840762)     <console type='pty'>
	I0520 12:54:51.763953  610501 main.go:141] libmachine: (addons-840762)       <target type='serial' port='0'/>
	I0520 12:54:51.763964  610501 main.go:141] libmachine: (addons-840762)     </console>
	I0520 12:54:51.763972  610501 main.go:141] libmachine: (addons-840762)     <rng model='virtio'>
	I0520 12:54:51.763982  610501 main.go:141] libmachine: (addons-840762)       <backend model='random'>/dev/random</backend>
	I0520 12:54:51.763993  610501 main.go:141] libmachine: (addons-840762)     </rng>
	I0520 12:54:51.764002  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764015  610501 main.go:141] libmachine: (addons-840762)     
	I0520 12:54:51.764028  610501 main.go:141] libmachine: (addons-840762)   </devices>
	I0520 12:54:51.764043  610501 main.go:141] libmachine: (addons-840762) </domain>
	I0520 12:54:51.764055  610501 main.go:141] libmachine: (addons-840762) 
	I0520 12:54:51.768989  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:fb:9f:32 in network default
	I0520 12:54:51.769612  610501 main.go:141] libmachine: (addons-840762) Ensuring networks are active...
	I0520 12:54:51.769643  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:51.770275  610501 main.go:141] libmachine: (addons-840762) Ensuring network default is active
	I0520 12:54:51.770537  610501 main.go:141] libmachine: (addons-840762) Ensuring network mk-addons-840762 is active
	I0520 12:54:51.770983  610501 main.go:141] libmachine: (addons-840762) Getting domain xml...
	I0520 12:54:51.771663  610501 main.go:141] libmachine: (addons-840762) Creating domain...
	I0520 12:54:52.966989  610501 main.go:141] libmachine: (addons-840762) Waiting to get IP...
	I0520 12:54:52.967844  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:52.968374  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:52.968400  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:52.968341  610539 retry.go:31] will retry after 245.330251ms: waiting for machine to come up
	I0520 12:54:53.215880  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.216390  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.216416  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.216352  610539 retry.go:31] will retry after 286.616472ms: waiting for machine to come up
	I0520 12:54:53.505129  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.505630  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.505658  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.505618  610539 retry.go:31] will retry after 312.787625ms: waiting for machine to come up
	I0520 12:54:53.820350  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:53.820828  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:53.820859  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:53.820772  610539 retry.go:31] will retry after 375.629067ms: waiting for machine to come up
	I0520 12:54:54.198230  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.198645  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.198678  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.198600  610539 retry.go:31] will retry after 558.50452ms: waiting for machine to come up
	I0520 12:54:54.758250  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:54.758836  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:54.758867  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:54.758777  610539 retry.go:31] will retry after 772.745392ms: waiting for machine to come up
	I0520 12:54:55.532754  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:55.533179  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:55.533205  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:55.533125  610539 retry.go:31] will retry after 1.015067234s: waiting for machine to come up
	I0520 12:54:56.549875  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:56.550336  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:56.550366  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:56.550270  610539 retry.go:31] will retry after 1.340438643s: waiting for machine to come up
	I0520 12:54:57.892757  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:57.893191  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:57.893226  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:57.893143  610539 retry.go:31] will retry after 1.779000898s: waiting for machine to come up
	I0520 12:54:59.674439  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:54:59.674849  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:54:59.674878  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:54:59.674795  610539 retry.go:31] will retry after 1.912219697s: waiting for machine to come up
	I0520 12:55:01.588719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:01.589170  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:01.589211  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:01.589118  610539 retry.go:31] will retry after 2.779568547s: waiting for machine to come up
	I0520 12:55:04.372082  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:04.372519  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:04.372543  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:04.372481  610539 retry.go:31] will retry after 2.436821512s: waiting for machine to come up
	I0520 12:55:06.810430  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:06.810907  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find current IP address of domain addons-840762 in network mk-addons-840762
	I0520 12:55:06.810932  610501 main.go:141] libmachine: (addons-840762) DBG | I0520 12:55:06.810869  610539 retry.go:31] will retry after 4.499322165s: waiting for machine to come up
	I0520 12:55:11.311574  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.311986  610501 main.go:141] libmachine: (addons-840762) Found IP for machine: 192.168.39.19
	I0520 12:55:11.312007  610501 main.go:141] libmachine: (addons-840762) Reserving static IP address...
	I0520 12:55:11.312017  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has current primary IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.312416  610501 main.go:141] libmachine: (addons-840762) DBG | unable to find host DHCP lease matching {name: "addons-840762", mac: "52:54:00:0f:4e:d2", ip: "192.168.39.19"} in network mk-addons-840762
	I0520 12:55:11.448691  610501 main.go:141] libmachine: (addons-840762) DBG | Getting to WaitForSSH function...
	I0520 12:55:11.448724  610501 main.go:141] libmachine: (addons-840762) Reserved static IP address: 192.168.39.19
	I0520 12:55:11.448738  610501 main.go:141] libmachine: (addons-840762) Waiting for SSH to be available...
	I0520 12:55:11.451103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451496  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.451530  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.451644  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH client type: external
	I0520 12:55:11.451668  610501 main.go:141] libmachine: (addons-840762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa (-rw-------)
	I0520 12:55:11.451710  610501 main.go:141] libmachine: (addons-840762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:55:11.451725  610501 main.go:141] libmachine: (addons-840762) DBG | About to run SSH command:
	I0520 12:55:11.451742  610501 main.go:141] libmachine: (addons-840762) DBG | exit 0
	I0520 12:55:11.581117  610501 main.go:141] libmachine: (addons-840762) DBG | SSH cmd err, output: <nil>: 
	I0520 12:55:11.581495  610501 main.go:141] libmachine: (addons-840762) KVM machine creation complete!
	I0520 12:55:11.581804  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:11.616351  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616704  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:11.616919  610501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:55:11.616938  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:11.618424  610501 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:55:11.618443  610501 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:55:11.618453  610501 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:55:11.618462  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.620876  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621298  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.621331  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.621539  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.621744  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.621950  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.622137  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.622327  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.622536  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.622550  610501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:55:11.732457  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:11.732485  610501 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:55:11.732494  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.736096  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736526  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.736565  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.736781  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.737000  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737207  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.737385  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.737562  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.737730  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.737740  610501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:55:11.846191  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:55:11.846307  610501 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:55:11.846320  610501 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:55:11.846331  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846646  610501 buildroot.go:166] provisioning hostname "addons-840762"
	I0520 12:55:11.846679  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:11.846901  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.849576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850003  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.850032  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.850162  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.850370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850550  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.850706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.850877  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.851054  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.851066  610501 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-840762 && echo "addons-840762" | sudo tee /etc/hostname
	I0520 12:55:11.976542  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-840762
	
	I0520 12:55:11.976570  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:11.979683  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.979984  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:11.980011  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:11.980169  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:11.980409  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980578  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:11.980706  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:11.980890  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:11.981083  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:11.981099  610501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-840762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-840762/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-840762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:55:12.102001  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:55:12.102048  610501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 12:55:12.102072  610501 buildroot.go:174] setting up certificates
	I0520 12:55:12.102083  610501 provision.go:84] configureAuth start
	I0520 12:55:12.102092  610501 main.go:141] libmachine: (addons-840762) Calling .GetMachineName
	I0520 12:55:12.102454  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.105413  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.105813  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.105841  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.106053  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.108107  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108401  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.108434  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.108544  610501 provision.go:143] copyHostCerts
	I0520 12:55:12.108615  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 12:55:12.108744  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 12:55:12.108804  610501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 12:55:12.108851  610501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.addons-840762 san=[127.0.0.1 192.168.39.19 addons-840762 localhost minikube]
	I0520 12:55:12.292779  610501 provision.go:177] copyRemoteCerts
	I0520 12:55:12.292840  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:55:12.292869  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.295591  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.295908  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.295936  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.296100  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.296359  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.296512  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.296659  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.382793  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 12:55:12.406307  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:55:12.428152  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:55:12.450174  610501 provision.go:87] duration metric: took 348.071182ms to configureAuth
	I0520 12:55:12.450217  610501 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:55:12.450425  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:12.450508  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.453476  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.453934  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.453969  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.454114  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.454327  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454542  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.454671  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.454839  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.455084  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.455101  610501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:55:12.724253  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
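The SSH command above writes an environment file for CRI-O and restarts the service. A quick way to confirm it from the host, sketched with the profile used in this run (the second command assumes the ISO's crio.service references this file via EnvironmentFile=, which the log itself does not show):
	out/minikube-linux-amd64 -p addons-840762 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-amd64 -p addons-840762 ssh -- "systemctl show crio --property=EnvironmentFiles"
	# expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '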
	
	I0520 12:55:12.724287  610501 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:55:12.724297  610501 main.go:141] libmachine: (addons-840762) Calling .GetURL
	I0520 12:55:12.725626  610501 main.go:141] libmachine: (addons-840762) DBG | Using libvirt version 6000000
	I0520 12:55:12.728077  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728460  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.728490  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.728650  610501 main.go:141] libmachine: Docker is up and running!
	I0520 12:55:12.728678  610501 main.go:141] libmachine: Reticulating splines...
	I0520 12:55:12.728688  610501 client.go:171] duration metric: took 21.879272392s to LocalClient.Create
	I0520 12:55:12.728716  610501 start.go:167] duration metric: took 21.879341856s to libmachine.API.Create "addons-840762"
	I0520 12:55:12.728725  610501 start.go:293] postStartSetup for "addons-840762" (driver="kvm2")
	I0520 12:55:12.728742  610501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:55:12.728761  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.729013  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:55:12.729042  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.731260  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731556  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.731576  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.731738  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.731952  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.732118  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.732284  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.815344  610501 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:55:12.819138  610501 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:55:12.819172  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 12:55:12.819249  610501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 12:55:12.819273  610501 start.go:296] duration metric: took 90.538988ms for postStartSetup
	I0520 12:55:12.819320  610501 main.go:141] libmachine: (addons-840762) Calling .GetConfigRaw
	I0520 12:55:12.819902  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.822344  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822666  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.822698  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.822886  610501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/config.json ...
	I0520 12:55:12.823055  610501 start.go:128] duration metric: took 21.993143462s to createHost
	I0520 12:55:12.823077  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.825156  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825572  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.825598  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.825816  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.826086  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826305  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.826500  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.826715  610501 main.go:141] libmachine: Using SSH client type: native
	I0520 12:55:12.826884  610501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0520 12:55:12.826895  610501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 12:55:12.937875  610501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716209712.902821410
	
	I0520 12:55:12.937911  610501 fix.go:216] guest clock: 1716209712.902821410
	I0520 12:55:12.937923  610501 fix.go:229] Guest: 2024-05-20 12:55:12.90282141 +0000 UTC Remote: 2024-05-20 12:55:12.823066987 +0000 UTC m=+22.107122705 (delta=79.754423ms)
	I0520 12:55:12.937959  610501 fix.go:200] guest clock delta is within tolerance: 79.754423ms
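The delta above is just the guest clock minus the host's reference timestamp; redoing the arithmetic:
	echo "1716209712.902821410 - 1716209712.823066987" | bc
	# .079754423 seconds, i.e. the 79.754423ms delta logged above, well within tolerance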
	I0520 12:55:12.937968  610501 start.go:83] releasing machines lock for "addons-840762", held for 22.108141971s
	I0520 12:55:12.937999  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.938309  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:12.941417  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941810  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.941840  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.941966  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942466  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942664  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:12.942768  610501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:55:12.942823  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.942897  610501 ssh_runner.go:195] Run: cat /version.json
	I0520 12:55:12.942918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:12.945235  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945541  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.945560  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945578  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.945756  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.945928  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946081  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:12.946102  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946103  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:12.946236  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:12.946316  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:12.946449  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:12.946595  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:12.946736  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	W0520 12:55:13.060984  610501 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:55:13.061095  610501 ssh_runner.go:195] Run: systemctl --version
	I0520 12:55:13.067028  610501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:55:13.231228  610501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:55:13.237522  610501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:55:13.237591  610501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:55:13.252624  610501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:55:13.252647  610501 start.go:494] detecting cgroup driver to use...
	I0520 12:55:13.252707  610501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:55:13.267587  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:55:13.282311  610501 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:55:13.282382  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:55:13.296303  610501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:55:13.309620  610501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:55:13.423597  610501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:55:13.589483  610501 docker.go:233] disabling docker service ...
	I0520 12:55:13.589574  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:55:13.603417  610501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:55:13.615738  610501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:55:13.729481  610501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:55:13.860853  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:55:13.873990  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:55:13.891599  610501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:55:13.891677  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.901887  610501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:55:13.901958  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.912206  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.922183  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.931875  610501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:55:13.941703  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.951407  610501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:55:13.967696  610501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
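Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a sketch of the intended end state written to a scratch path, not a dump of the real file on the node:
	cat <<-'EOF' > /tmp/02-crio.conf.expected
	# illustrative only; the sections and keys mirror the sed edits above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF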
	I0520 12:55:13.977475  610501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:55:13.986454  610501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:55:13.986509  610501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:55:13.998511  610501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
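Equivalent one-off commands for the two kernel prerequisites handled here, shown only as a sketch of what the runner does on the guest:
	sudo modprobe br_netfilter              # loaded because /proc/sys/net/bridge/bridge-nf-call-iptables was missing
	sudo sysctl -w net.ipv4.ip_forward=1    # same effect as the echo into /proc/sys/net/ipv4/ip_forward above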
	I0520 12:55:14.007925  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:14.124297  610501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:55:14.265547  610501 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:55:14.265641  610501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:55:14.270847  610501 start.go:562] Will wait 60s for crictl version
	I0520 12:55:14.270917  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:55:14.274825  610501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:55:14.318641  610501 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:55:14.318754  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.346323  610501 ssh_runner.go:195] Run: crio --version
	I0520 12:55:14.377643  610501 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:55:14.379895  610501 main.go:141] libmachine: (addons-840762) Calling .GetIP
	I0520 12:55:14.382720  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383143  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:14.383180  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:14.383427  610501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:55:14.387501  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:14.399548  610501 kubeadm.go:877] updating cluster {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:55:14.399660  610501 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:55:14.399703  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:14.429577  610501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 12:55:14.429652  610501 ssh_runner.go:195] Run: which lz4
	I0520 12:55:14.433365  610501 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 12:55:14.437014  610501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 12:55:14.437053  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 12:55:15.637746  610501 crio.go:462] duration metric: took 1.204422377s to copy over tarball
	I0520 12:55:15.637823  610501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 12:55:17.802635  610501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.164782874s)
	I0520 12:55:17.802675  610501 crio.go:469] duration metric: took 2.164898269s to extract the tarball
	I0520 12:55:17.802686  610501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 12:55:17.838706  610501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:55:17.877747  610501 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:55:17.877773  610501 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:55:17.877783  610501 kubeadm.go:928] updating node { 192.168.39.19 8443 v1.30.1 crio true true} ...
	I0520 12:55:17.877923  610501 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-840762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:55:17.878011  610501 ssh_runner.go:195] Run: crio config
	I0520 12:55:17.922732  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:17.922758  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:17.922785  610501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:55:17.922825  610501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-840762 NodeName:addons-840762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:55:17.922996  610501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-840762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 12:55:17.923077  610501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:55:17.932833  610501 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:55:17.932937  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 12:55:17.941978  610501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:55:17.957376  610501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:55:17.972370  610501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
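The rendered kubeadm config is staged on the node as kubeadm.yaml.new (and promoted to kubeadm.yaml a little further down, just before init). It can be inspected or linted from the host; the validate subcommand is assumed to be available in this kubeadm release:
	out/minikube-linux-amd64 -p addons-840762 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	out/minikube-linux-amd64 -p addons-840762 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml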
	I0520 12:55:17.987265  610501 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I0520 12:55:17.990708  610501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:55:18.001573  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:18.127654  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:18.143797  610501 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762 for IP: 192.168.39.19
	I0520 12:55:18.143820  610501 certs.go:194] generating shared ca certs ...
	I0520 12:55:18.143842  610501 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.144003  610501 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 12:55:18.358697  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt ...
	I0520 12:55:18.358733  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt: {Name:mk0337969521f8fcb91840a13b9dacd1361e0416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.358935  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key ...
	I0520 12:55:18.358950  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key: {Name:mk0b3018854c3a76c6bc712c400145554051e5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.359066  610501 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 12:55:18.637573  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt ...
	I0520 12:55:18.637611  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt: {Name:mk4030326ff4bd93acf0ae11bc67ee09461f2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637793  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key ...
	I0520 12:55:18.637804  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key: {Name:mk368b7d66fa86a67c9ef13f55a63c8fbe995e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.637889  610501 certs.go:256] generating profile certs ...
	I0520 12:55:18.637948  610501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key
	I0520 12:55:18.637962  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt with IP's: []
	I0520 12:55:18.765434  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt ...
	I0520 12:55:18.765467  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: {Name:mk555ad1a22ae83e71bd1d88db4cd731d3a9df3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765635  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key ...
	I0520 12:55:18.765646  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.key: {Name:mkc4037f80e62a174b1c3df78060c4c466e65958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.765712  610501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da
	I0520 12:55:18.765730  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19]
	I0520 12:55:18.937615  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da ...
	I0520 12:55:18.937656  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da: {Name:mk5a01215158cf3231fad08bb78d8a3dfa212c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937851  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da ...
	I0520 12:55:18.937873  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da: {Name:mk298b016f1b857a88dbdb4cbaadf8e747393b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:18.937973  610501 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt
	I0520 12:55:18.938079  610501 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key.1a6be2da -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key
	I0520 12:55:18.938151  610501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key
	I0520 12:55:18.938179  610501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt with IP's: []
	I0520 12:55:19.226331  610501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt ...
	I0520 12:55:19.226369  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt: {Name:mk192ed701b920896d7fa7fbd1cf8e177461df3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226564  610501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key ...
	I0520 12:55:19.226582  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key: {Name:mk3ad4b89a8ee430000e1f8b8ab63f33e943010e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:19.226798  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 12:55:19.226843  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 12:55:19.226878  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:55:19.226916  610501 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 12:55:19.227551  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:55:19.253380  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:55:19.275654  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:55:19.297712  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 12:55:19.319707  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 12:55:19.341205  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:55:19.365239  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:55:19.390731  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:55:19.416007  610501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:55:19.438628  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:55:19.454417  610501 ssh_runner.go:195] Run: openssl version
	I0520 12:55:19.459803  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:55:19.471875  610501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476597  610501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.476677  610501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:55:19.483260  610501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
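The b5213941.0 symlink follows OpenSSL's subject-hash lookup convention: the hash printed by openssl x509 -hash becomes the filename that clients resolve under /etc/ssl/certs. Reproducing the two steps by hand inside the guest:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"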
	I0520 12:55:19.497343  610501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:55:19.501416  610501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:55:19.501498  610501 kubeadm.go:391] StartCluster: {Name:addons-840762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-840762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:55:19.501602  610501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:55:19.501684  610501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:55:19.545075  610501 cri.go:89] found id: ""
	I0520 12:55:19.545173  610501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 12:55:19.554806  610501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 12:55:19.568214  610501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 12:55:19.577374  610501 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 12:55:19.577399  610501 kubeadm.go:156] found existing configuration files:
	
	I0520 12:55:19.577443  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 12:55:19.585694  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 12:55:19.585763  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 12:55:19.594289  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 12:55:19.602494  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 12:55:19.602553  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 12:55:19.611323  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.619340  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 12:55:19.619399  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 12:55:19.628227  610501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 12:55:19.636652  610501 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 12:55:19.636728  610501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 12:55:19.645298  610501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 12:55:19.702471  610501 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 12:55:19.702580  610501 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 12:55:19.825588  610501 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 12:55:19.825748  610501 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 12:55:19.825886  610501 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 12:55:20.025596  610501 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 12:55:20.083699  610501 out.go:204]   - Generating certificates and keys ...
	I0520 12:55:20.083850  610501 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:55:20.083934  610501 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:55:20.092217  610501 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:55:20.364436  610501 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:55:20.502138  610501 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:55:20.564527  610501 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:55:20.703162  610501 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:55:20.703407  610501 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:20.770361  610501 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:55:20.884233  610501 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-840762 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0520 12:55:21.012631  610501 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:55:21.208632  610501 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:55:21.332544  610501 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:55:21.332752  610501 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:55:21.589278  610501 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:55:21.706399  610501 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:55:21.812525  610501 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:55:21.987255  610501 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:55:22.050057  610501 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:55:22.050588  610501 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:55:22.054797  610501 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:55:22.057239  610501 out.go:204]   - Booting up control plane ...
	I0520 12:55:22.057342  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:55:22.057410  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:55:22.057492  610501 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:55:22.071354  610501 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:55:22.072252  610501 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:55:22.072345  610501 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:55:22.194444  610501 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:55:22.194562  610501 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:55:23.195085  610501 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001405192s
	I0520 12:55:23.195201  610501 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:55:28.694415  610501 kubeadm.go:309] [api-check] The API server is healthy after 5.502847931s
	I0520 12:55:28.714022  610501 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:55:28.726753  610501 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:55:28.761883  610501 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:55:28.762170  610501 kubeadm.go:309] [mark-control-plane] Marking the node addons-840762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:55:28.775335  610501 kubeadm.go:309] [bootstrap-token] Using token: ujdvgq.4r4gsjxdolox8f2t
	I0520 12:55:28.777700  610501 out.go:204]   - Configuring RBAC rules ...
	I0520 12:55:28.777840  610501 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:55:28.782202  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:55:28.794168  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:55:28.797442  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:55:28.800674  610501 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:55:28.804165  610501 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:55:29.101623  610501 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:55:29.550656  610501 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:55:30.105708  610501 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:55:30.106638  610501 kubeadm.go:309] 
	I0520 12:55:30.106743  610501 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:55:30.106763  610501 kubeadm.go:309] 
	I0520 12:55:30.106876  610501 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:55:30.106899  610501 kubeadm.go:309] 
	I0520 12:55:30.106949  610501 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:55:30.107030  610501 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:55:30.107100  610501 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:55:30.107110  610501 kubeadm.go:309] 
	I0520 12:55:30.107159  610501 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:55:30.107165  610501 kubeadm.go:309] 
	I0520 12:55:30.107205  610501 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:55:30.107211  610501 kubeadm.go:309] 
	I0520 12:55:30.107253  610501 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:55:30.107333  610501 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:55:30.107424  610501 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:55:30.107431  610501 kubeadm.go:309] 
	I0520 12:55:30.107535  610501 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:55:30.107635  610501 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:55:30.107644  610501 kubeadm.go:309] 
	I0520 12:55:30.107756  610501 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.107892  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 12:55:30.107936  610501 kubeadm.go:309] 	--control-plane 
	I0520 12:55:30.107945  610501 kubeadm.go:309] 
	I0520 12:55:30.108063  610501 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:55:30.108079  610501 kubeadm.go:309] 
	I0520 12:55:30.108173  610501 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ujdvgq.4r4gsjxdolox8f2t \
	I0520 12:55:30.108271  610501 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 12:55:30.108549  610501 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 12:55:30.108578  610501 cni.go:84] Creating CNI manager for ""
	I0520 12:55:30.108590  610501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:55:30.111265  610501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 12:55:30.113507  610501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 12:55:30.123451  610501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 12:55:30.139800  610501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:55:30.139944  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-840762 minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=addons-840762 minikube.k8s.io/primary=true
	I0520 12:55:30.139947  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.244780  610501 ops.go:34] apiserver oom_adj: -16
	I0520 12:55:30.244858  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:30.745128  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.245492  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:31.745341  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.244914  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:32.745755  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.245160  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:33.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.245731  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:34.745905  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.245566  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:35.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.245227  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:36.745121  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.245280  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:37.745226  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.245665  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:38.745064  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.245512  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:39.745828  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.245009  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:40.745277  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.245343  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:41.745342  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.245464  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.745186  610501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:55:42.919515  610501 kubeadm.go:1107] duration metric: took 12.779637158s to wait for elevateKubeSystemPrivileges
	W0520 12:55:42.919570  610501 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 12:55:42.919582  610501 kubeadm.go:393] duration metric: took 23.418090172s to StartCluster
	I0520 12:55:42.919607  610501 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.919772  610501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:55:42.920344  610501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:55:42.920956  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 12:55:42.921004  610501 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:55:42.923778  610501 out.go:177] * Verifying Kubernetes components...
	I0520 12:55:42.921047  610501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 12:55:42.921275  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:42.926173  610501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:55:42.926185  610501 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-840762"
	I0520 12:55:42.926207  610501 addons.go:69] Setting inspektor-gadget=true in profile "addons-840762"
	I0520 12:55:42.926220  610501 addons.go:69] Setting metrics-server=true in profile "addons-840762"
	I0520 12:55:42.926235  610501 addons.go:69] Setting helm-tiller=true in profile "addons-840762"
	I0520 12:55:42.926254  610501 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-840762"
	I0520 12:55:42.926257  610501 addons.go:69] Setting cloud-spanner=true in profile "addons-840762"
	I0520 12:55:42.926263  610501 addons.go:69] Setting ingress-dns=true in profile "addons-840762"
	I0520 12:55:42.926270  610501 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-840762"
	I0520 12:55:42.926271  610501 addons.go:69] Setting storage-provisioner=true in profile "addons-840762"
	I0520 12:55:42.926277  610501 addons.go:234] Setting addon cloud-spanner=true in "addons-840762"
	I0520 12:55:42.926279  610501 addons.go:69] Setting gcp-auth=true in profile "addons-840762"
	I0520 12:55:42.926283  610501 addons.go:234] Setting addon ingress-dns=true in "addons-840762"
	I0520 12:55:42.926284  610501 addons.go:69] Setting default-storageclass=true in profile "addons-840762"
	I0520 12:55:42.926297  610501 mustload.go:65] Loading cluster: addons-840762
	I0520 12:55:42.926305  610501 addons.go:69] Setting registry=true in profile "addons-840762"
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926319  610501 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-840762"
	I0520 12:55:42.926323  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926321  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-840762"
	I0520 12:55:42.926335  610501 addons.go:234] Setting addon registry=true in "addons-840762"
	I0520 12:55:42.926338  610501 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-840762"
	I0520 12:55:42.926364  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926510  610501 config.go:182] Loaded profile config "addons-840762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:55:42.926801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon inspektor-gadget=true in "addons-840762"
	I0520 12:55:42.926249  610501 addons.go:234] Setting addon metrics-server=true in "addons-840762"
	I0520 12:55:42.926856  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926862  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926869  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926877  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926889  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926258  610501 addons.go:69] Setting ingress=true in profile "addons-840762"
	I0520 12:55:42.926904  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926907  610501 addons.go:69] Setting volumesnapshots=true in profile "addons-840762"
	I0520 12:55:42.926926  610501 addons.go:234] Setting addon ingress=true in "addons-840762"
	I0520 12:55:42.926932  610501 addons.go:234] Setting addon volumesnapshots=true in "addons-840762"
	I0520 12:55:42.926956  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926960  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.926250  610501 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:42.927007  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927203  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927223  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927277  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927304  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927313  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927321  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927342  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926273  610501 addons.go:234] Setting addon helm-tiller=true in "addons-840762"
	I0520 12:55:42.927353  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926299  610501 addons.go:234] Setting addon storage-provisioner=true in "addons-840762"
	I0520 12:55:42.927324  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927371  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.926313  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927403  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927420  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927438  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.926211  610501 addons.go:69] Setting yakd=true in profile "addons-840762"
	I0520 12:55:42.927468  610501 addons.go:234] Setting addon yakd=true in "addons-840762"
	I0520 12:55:42.927472  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927519  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.927850  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.927890  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.927962  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928030  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.928378  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928410  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.928472  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.928500  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.949431  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0520 12:55:42.949456  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0520 12:55:42.949517  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0520 12:55:42.949805  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0520 12:55:42.950251  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950259  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.950280  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.950304  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.961815  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.961998  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962130  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962181  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0520 12:55:42.962318  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0520 12:55:42.962475  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.962887  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963010  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.963210  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963226  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963369  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963380  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963502  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963513  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963640  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.963651  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.963820  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.964552  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.964602  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.964934  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.964957  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965029  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965087  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965217  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.965230  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.965317  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.965630  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.965679  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.965788  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966394  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.966436  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.966662  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.966702  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:42.967039  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967085  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.967295  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.967336  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.968919  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 12:55:42.969170  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:42.969564  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.969595  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.969824  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.970420  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.970440  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.970891  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.971471  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.971504  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:42.983702  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0520 12:55:42.989821  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:42.990621  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:42.990649  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:42.991055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:42.991712  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:42.991761  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.002410  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0520 12:55:43.003132  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.003287  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0520 12:55:43.003423  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0520 12:55:43.003921  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.004372  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0520 12:55:43.004660  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004675  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004807  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.004818  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.004868  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005179  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005279  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.005691  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.005760  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0520 12:55:43.006499  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.006546  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.006783  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.007377  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007400  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.007554  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.007567  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008005  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.008037  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.008289  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0520 12:55:43.008399  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.008419  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.008780  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.008992  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009055  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.009063  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.009221  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.009752  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.009789  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.010310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0520 12:55:43.010592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.010621  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.011044  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.011105  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.011348  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.011840  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.011881  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.012129  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.012289  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.012304  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.015140  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 12:55:43.012670  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.016270  610501 addons.go:234] Setting addon default-storageclass=true in "addons-840762"
	I0520 12:55:43.017402  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.017801  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.017842  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.020141  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.019350  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0520 12:55:43.019379  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0520 12:55:43.019420  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.021536  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0520 12:55:43.022303  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:43.024787  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.024809  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 12:55:43.024831  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.023254  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023306  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.023310  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0520 12:55:43.023345  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0520 12:55:43.023354  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.026350  610501 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-840762"
	I0520 12:55:43.026398  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:43.026788  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.026828  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.027387  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0520 12:55:43.027626  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.027638  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.028051  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.028314  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0520 12:55:43.028592  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.028611  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029136  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.029215  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.029238  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.029295  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029296  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.029315  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.029346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.029505  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.029572  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.029626  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.029815  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.029880  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.030169  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.030776  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.030822  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.031146  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.031163  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.031323  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.034413  610501 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 12:55:43.031845  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031879  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.031970  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.032176  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.032375  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.036749  610501 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.036763  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 12:55:43.036787  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.037457  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037481  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037723  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.037740  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.037816  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.038379  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.038890  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0520 12:55:43.039115  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37507
	I0520 12:55:43.039514  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.039999  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.040190  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040214  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040290  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040641  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.040675  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.040795  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.040809  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.040858  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.040862  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.043266  610501 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 12:55:43.041720  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.042600  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.042944  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.043023  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.043541  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044232  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.044298  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.044797  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.045484  610501 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 12:55:43.045497  610501 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 12:55:43.045518  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.045599  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.045639  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.045667  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.048013  610501 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 12:55:43.046613  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.046718  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.046798  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0520 12:55:43.048712  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0520 12:55:43.049336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.050022  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.050433  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.050492  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 12:55:43.050855  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.051378  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 12:55:43.052681  610501 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 12:55:43.052723  610501 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 12:55:43.052806  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.053613  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.053643  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.054003  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.054050  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.055062  610501 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 12:55:43.055263  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.055499  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.057312  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.057625  610501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 12:55:43.058419  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.058451  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.058549  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.059404  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0520 12:55:43.059425  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 12:55:43.059434  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 12:55:43.059639  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 12:55:43.059783  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.060180  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.060216  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0520 12:55:43.061533  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061624  610501 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 12:55:43.061636  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.061845  610501 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 12:55:43.061908  610501 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 12:55:43.061914  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 12:55:43.062300  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.063479  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.063614  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.063635  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 12:55:43.063653  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063658  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 12:55:43.063674  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063734  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063764  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.063794  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.064448  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064498  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064525  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.064620  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.064627  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.064717  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064761  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.064800  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.065417  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.069387  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069428  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.069390  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069457  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.069560  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.069579  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.073328  610501 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 12:55:43.070346  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.070518  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071145  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.071398  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.071491  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.072013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.072535  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073487  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.073620  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074419  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.074767  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.074877  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42271
	I0520 12:55:43.075149  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.076245  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 12:55:43.076377  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076274  610501 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 12:55:43.076312  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.076399  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076485  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076491  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076518  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.076262  610501 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.076630  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076642  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076689  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076799  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076883  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.076944  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:43.077301  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.078456  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 12:55:43.078484  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078503  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078554  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078573  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078590  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 12:55:43.078624  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:43.078637  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.078804  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078805  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078813  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078827  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.078918  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.079277  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.080942  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 12:55:43.080976  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083192  610501 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.083214  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 12:55:43.083235  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.083265  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.083750  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083781  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083802  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083820  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.083933  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.084415  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.085529  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 12:55:43.086510  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.087336  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.087938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.088370  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.089371  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 12:55:43.088654  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.088714  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.089430  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.089714  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.091686  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 12:55:43.091791  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.091960  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.091975  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.093882  610501 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 12:55:43.096277  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 12:55:43.096308  610501 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 12:55:43.096334  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.093957  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.094177  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.094370  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.097828  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0520 12:55:43.098616  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 12:55:43.098900  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.098969  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.099332  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.099866  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.100798  610501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 12:55:43.102742  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 12:55:43.102765  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 12:55:43.100830  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.102790  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.100561  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.102802  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.102789  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0520 12:55:43.101525  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.102862  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.103030  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.103224  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.103401  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.103410  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.103779  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:43.103815  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.105233  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:43.105267  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:43.105428  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.105719  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.105859  610501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.105875  610501 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 12:55:43.105887  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.106101  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.106122  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.106160  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:43.106373  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.106425  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:43.106575  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.106861  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.107019  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.108154  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:43.110645  610501 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 12:55:43.108938  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.110686  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.109448  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.113428  610501 out.go:177]   - Using image docker.io/busybox:stable
	I0520 12:55:43.115405  610501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.113449  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.113676  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.115433  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 12:55:43.115464  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:43.115705  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.115895  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.118641  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119117  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:43.119150  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:43.119343  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:43.119533  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:43.119694  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:43.119816  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:43.573616  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:55:43.619918  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 12:55:43.623606  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 12:55:43.643211  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 12:55:43.683331  610501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:55:43.683420  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 12:55:43.685462  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 12:55:43.685482  610501 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 12:55:43.701839  610501 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 12:55:43.701864  610501 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 12:55:43.716671  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 12:55:43.728860  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 12:55:43.728882  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 12:55:43.749092  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 12:55:43.752362  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 12:55:43.759380  610501 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 12:55:43.759401  610501 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 12:55:43.768880  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 12:55:43.768902  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 12:55:43.776942  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 12:55:43.776981  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 12:55:43.794490  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 12:55:43.794512  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 12:55:43.876312  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 12:55:43.876350  610501 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 12:55:43.928322  610501 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 12:55:43.928352  610501 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 12:55:43.980917  610501 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:43.980943  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 12:55:43.985401  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 12:55:43.985423  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 12:55:44.010497  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 12:55:44.010530  610501 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 12:55:44.025070  610501 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.025103  610501 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 12:55:44.025300  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 12:55:44.025326  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 12:55:44.097831  610501 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 12:55:44.097860  610501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 12:55:44.099542  610501 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 12:55:44.099567  610501 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 12:55:44.109990  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 12:55:44.110015  610501 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 12:55:44.125277  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 12:55:44.152567  610501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.152593  610501 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 12:55:44.183917  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 12:55:44.199196  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 12:55:44.199234  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 12:55:44.278037  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 12:55:44.278067  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 12:55:44.293166  610501 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.293217  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 12:55:44.297324  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 12:55:44.297351  610501 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 12:55:44.315561  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 12:55:44.346264  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 12:55:44.346298  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 12:55:44.453370  610501 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 12:55:44.453396  610501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 12:55:44.510982  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 12:55:44.586650  610501 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.586684  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 12:55:44.611553  610501 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 12:55:44.611584  610501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 12:55:44.726323  610501 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 12:55:44.726349  610501 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 12:55:44.881456  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 12:55:44.881482  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 12:55:44.890866  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:44.927590  610501 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:44.927619  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 12:55:45.137317  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 12:55:45.137345  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 12:55:45.209075  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 12:55:45.441214  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 12:55:45.441241  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 12:55:45.828932  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 12:55:45.828994  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 12:55:46.257170  610501 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:46.257208  610501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 12:55:46.498819  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 12:55:47.266993  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.693329132s)
	I0520 12:55:47.267056  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267070  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267417  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:47.267482  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267504  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:47.267520  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:47.267530  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:47.267892  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:47.267912  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:50.073084  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 12:55:50.073138  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.076118  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076632  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.076665  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.076958  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.077217  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.077455  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.077652  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:50.468021  610501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 12:55:50.521617  610501 addons.go:234] Setting addon gcp-auth=true in "addons-840762"
	I0520 12:55:50.521694  610501 host.go:66] Checking if "addons-840762" exists ...
	I0520 12:55:50.522184  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.522239  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.553174  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0520 12:55:50.553754  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.554480  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.554514  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.554880  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.555571  610501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:55:50.555609  610501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:55:50.572015  610501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0520 12:55:50.572479  610501 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:55:50.573041  610501 main.go:141] libmachine: Using API Version  1
	I0520 12:55:50.573078  610501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:55:50.573484  610501 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:55:50.573698  610501 main.go:141] libmachine: (addons-840762) Calling .GetState
	I0520 12:55:50.575484  610501 main.go:141] libmachine: (addons-840762) Calling .DriverName
	I0520 12:55:50.575739  610501 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 12:55:50.575769  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHHostname
	I0520 12:55:50.579095  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579655  610501 main.go:141] libmachine: (addons-840762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4e:d2", ip: ""} in network mk-addons-840762: {Iface:virbr1 ExpiryTime:2024-05-20 13:55:04 +0000 UTC Type:0 Mac:52:54:00:0f:4e:d2 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-840762 Clientid:01:52:54:00:0f:4e:d2}
	I0520 12:55:50.579690  610501 main.go:141] libmachine: (addons-840762) DBG | domain addons-840762 has defined IP address 192.168.39.19 and MAC address 52:54:00:0f:4e:d2 in network mk-addons-840762
	I0520 12:55:50.579792  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHPort
	I0520 12:55:50.580013  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHKeyPath
	I0520 12:55:50.580346  610501 main.go:141] libmachine: (addons-840762) Calling .GetSSHUsername
	I0520 12:55:50.580587  610501 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/addons-840762/id_rsa Username:docker}
	I0520 12:55:51.388578  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.768609397s)
	I0520 12:55:51.388647  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388650  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.765002737s)
	I0520 12:55:51.388698  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388707  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.745461801s)
	I0520 12:55:51.388717  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388734  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.388746  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388661  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.388887  610501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.705429993s)
	I0520 12:55:51.388915  610501 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
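The `start.go:946` line above records the effect of the earlier `sed` pipeline: minikube injects a `hosts` block into the CoreDNS Corefile so that `host.minikube.internal` resolves to the host-side gateway (192.168.39.1 in this run). A minimal way to confirm the edit by hand, assuming `kubectl` is pointed at this cluster (this check is not part of the test itself), is:

	# Print the Corefile and show the injected hosts block
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
	# Expected fragment, per the sed expression used above:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }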
	I0520 12:55:51.388936  610501 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.705565209s)
	I0520 12:55:51.389084  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389097  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389107  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389116  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389209  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389232  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389259  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389270  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.389296  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389326  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389343  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389349  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389360  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.389372  610501 addons.go:470] Verifying addon ingress=true in "addons-840762"
	I0520 12:55:51.389379  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.672674787s)
	I0520 12:55:51.389405  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.389425  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.392370  610501 out.go:177] * Verifying ingress addon...
	I0520 12:55:51.389528  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.640405425s)
	I0520 12:55:51.389584  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.637201371s)
	I0520 12:55:51.389624  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.264322968s)
	I0520 12:55:51.389661  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.20570414s)
	I0520 12:55:51.389732  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.074144721s)
	I0520 12:55:51.389772  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878747953s)
	I0520 12:55:51.389865  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498962158s)
	I0520 12:55:51.389933  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.180826737s)
	I0520 12:55:51.389965  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.389973  610501 node_ready.go:35] waiting up to 6m0s for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.389991  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.390011  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.389352  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.390014  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.394170  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394193  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394192  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394207  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394227  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394229  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394240  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394253  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394268  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394281  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394210  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394296  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394300  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394296  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.394313  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394328  610501 main.go:141] libmachine: Making call to close driver server
	W0520 12:55:51.394209  610501 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394339  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.394380  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.394387  610501 retry.go:31] will retry after 303.389823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
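The failure above is the usual ordering race when CRDs and the custom resources that use them are applied in a single `kubectl apply` batch: the VolumeSnapshotClass object is rejected because the `snapshot.storage.k8s.io/v1` CRDs are not yet established. minikube simply retries (and, as the later `apply --force` run in this log shows, re-applies the whole set). An explicit alternative, shown only as a sketch and not what the test does, is to wait for CRD establishment before applying the dependent object:

	# Sketch: apply the CRDs first, wait until they are Established, then apply objects that depend on them.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml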
	I0520 12:55:51.395046  610501 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 12:55:51.395166  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395197  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395199  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395214  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395218  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395233  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395245  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395262  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395272  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395276  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395280  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395288  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395291  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395307  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395313  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395321  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395263  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395338  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395345  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395354  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395361  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395367  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395429  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.395448  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.395459  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395466  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395481  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395204  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.395327  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.395347  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.395846  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.396442  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396480  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396488  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.398870  610501 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-840762 service yakd-dashboard -n yakd-dashboard
	
	I0520 12:55:51.396611  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396643  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396663  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396677  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396695  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.396696  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396721  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396732  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.396855  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400153  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.400898  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400913  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400902  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400962  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400970  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400973  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.400990  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.401004  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.400980  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401036  610501 addons.go:470] Verifying addon metrics-server=true in "addons-840762"
	I0520 12:55:51.400203  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.401684  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.401704  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.401745  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402068  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.402086  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.402091  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:51.402097  610501 addons.go:470] Verifying addon registry=true in "addons-840762"
	I0520 12:55:51.405187  610501 out.go:177] * Verifying registry addon...
	I0520 12:55:51.408123  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 12:55:51.437541  610501 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 12:55:51.437563  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.449131  610501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 12:55:51.449151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:51.457909  610501 node_ready.go:49] node "addons-840762" has status "Ready":"True"
	I0520 12:55:51.457932  610501 node_ready.go:38] duration metric: took 63.66746ms for node "addons-840762" to be "Ready" ...
	I0520 12:55:51.457941  610501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
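`pod_ready.go` polls the listed system-critical pods through the API server until each reports `Ready`. A roughly equivalent manual check, assuming the same kubeconfig (a sketch, not the test's own mechanism):

	# Wait for the DNS and control-plane pods carrying the labels listed above; 6m matches the test's budget.
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m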
	I0520 12:55:51.478924  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.478955  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479239  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:51.479251  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479266  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:51.479268  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:51.479509  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:51.479526  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 12:55:51.479651  610501 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0520 12:55:51.494377  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.508970  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.508991  610501 pod_ready.go:81] duration metric: took 14.583357ms for pod "coredns-7db6d8ff4d-bxb6r" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.509001  610501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544741  610501 pod_ready.go:92] pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.544772  610501 pod_ready.go:81] duration metric: took 35.763404ms for pod "coredns-7db6d8ff4d-vp4g8" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.544784  610501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576819  610501 pod_ready.go:92] pod "etcd-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.576843  610501 pod_ready.go:81] duration metric: took 32.050234ms for pod "etcd-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.576852  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592484  610501 pod_ready.go:92] pod "kube-apiserver-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.592520  610501 pod_ready.go:81] duration metric: took 15.660119ms for pod "kube-apiserver-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.592536  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.698831  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 12:55:51.797633  610501 pod_ready.go:92] pod "kube-controller-manager-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:51.797657  610501 pod_ready.go:81] duration metric: took 205.113267ms for pod "kube-controller-manager-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.797669  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:51.892953  610501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-840762" context rescaled to 1 replicas
	I0520 12:55:51.899463  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:51.912554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.200864  610501 pod_ready.go:92] pod "kube-proxy-mpkr9" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.200894  610501 pod_ready.go:81] duration metric: took 403.210884ms for pod "kube-proxy-mpkr9" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.200908  610501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.404611  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:52.417071  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.607922  610501 pod_ready.go:92] pod "kube-scheduler-addons-840762" in "kube-system" namespace has status "Ready":"True"
	I0520 12:55:52.607946  610501 pod_ready.go:81] duration metric: took 407.031521ms for pod "kube-scheduler-addons-840762" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.607957  610501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
	I0520 12:55:52.938316  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:52.939704  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.105590  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.606697582s)
	I0520 12:55:53.105615  610501 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529845767s)
	I0520 12:55:53.105664  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.105679  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.108268  610501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 12:55:53.105995  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.106025  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.110677  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.110703  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.112892  610501 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 12:55:53.110719  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.115284  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 12:55:53.115305  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 12:55:53.115627  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.115673  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.115691  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.115708  610501 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-840762"
	I0520 12:55:53.118485  610501 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 12:55:53.122364  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 12:55:53.138587  610501 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 12:55:53.138615  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:53.192835  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 12:55:53.192870  610501 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 12:55:53.284131  610501 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.284160  610501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 12:55:53.399393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.413779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:53.418308  610501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 12:55:53.628280  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:53.677186  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.97829441s)
	I0520 12:55:53.677265  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677280  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677596  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677626  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.677630  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:53.677637  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:53.677662  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:53.677944  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:53.677959  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:53.903390  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:53.913905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.129023  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.400578  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.414433  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.634153  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:54.639118  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:54.957073  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:54.957497  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:54.969504  610501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.551140451s)
	I0520 12:55:54.969566  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.969580  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.969979  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.969997  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.969998  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970008  610501 main.go:141] libmachine: Making call to close driver server
	I0520 12:55:54.970019  610501 main.go:141] libmachine: (addons-840762) Calling .Close
	I0520 12:55:54.970333  610501 main.go:141] libmachine: (addons-840762) DBG | Closing plugin on server side
	I0520 12:55:54.970359  610501 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:55:54.970372  610501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:55:54.971645  610501 addons.go:470] Verifying addon gcp-auth=true in "addons-840762"
	I0520 12:55:54.974788  610501 out.go:177] * Verifying gcp-auth addon...
	I0520 12:55:54.977686  610501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 12:55:54.992478  610501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 12:55:54.992501  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:55.127400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.399268  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.413367  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:55.627381  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:55.916014  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:55.918171  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:55.981718  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.127730  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.399560  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.413077  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.482224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:56.627478  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:56.900468  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:56.912466  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:56.981665  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.115037  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:57.130520  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.400035  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.413623  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.481613  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:57.629820  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:57.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:57.915039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:57.981464  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.127457  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.400777  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.414573  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.481462  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:58.628832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:58.899601  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:58.914331  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:58.982255  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.115366  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:55:59.133101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.401812  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.419535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.481225  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:55:59.631104  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:55:59.902353  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:55:59.912317  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:55:59.981330  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.128485  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.401561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.430286  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.482144  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:00.628293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:00.899691  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:00.915101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:00.982008  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.129239  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.399224  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.414726  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.481942  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:01.616921  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:01.628729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:01.900780  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:01.913368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:01.981214  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.127371  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.401377  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.414207  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.482101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:02.627879  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:02.900216  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:02.914014  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:02.982218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.130013  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.400273  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.413347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:03.481203  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:03.629010  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:03.899658  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:03.913498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.022081  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.115681  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:04.128931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.399719  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.413265  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:04.630465  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:04.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:04.915827  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:04.982611  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.127045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.399804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.413527  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.482587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:05.628542  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:05.900077  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:05.913575  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:05.981299  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.131335  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.399067  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.413005  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.482481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:06.617357  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:06.629066  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:06.899839  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:06.913012  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:06.982047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.132364  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.399705  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.417400  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.481431  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:07.628233  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:07.900194  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:07.912856  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:07.981096  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.130863  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.399114  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.421325  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.488216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:08.626810  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:08.899746  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:08.913412  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:08.981447  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.114772  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:09.127612  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.399816  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.414275  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.481644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:09.628774  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:09.900228  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:09.915686  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:09.983410  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.128911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.399503  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.413047  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.482114  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:10.627627  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:10.900120  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:10.912741  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:10.981653  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.127586  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.399736  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.415842  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.482111  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:11.616098  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:11.631401  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:11.899584  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:11.914011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:11.982488  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.133642  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.404826  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.415781  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.482240  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:12.627875  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:12.900429  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:12.913578  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:12.982373  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.128350  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.400020  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.412649  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.481828  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:13.627553  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:13.899893  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:13.912654  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:13.981503  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.115122  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:14.129175  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.400146  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.413152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.481089  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:14.628054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:14.900376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:14.920739  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:14.982583  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.127618  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.400262  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.415277  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.482039  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:15.627946  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:15.900718  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:15.912777  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:15.982140  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.129993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.399519  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.412993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.482054  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:16.614742  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:16.628387  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:16.902864  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:16.916738  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:16.982514  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.127713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.398762  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.416228  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.481442  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:17.628109  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:17.901062  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:17.915833  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:17.983591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.128602  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.400312  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.413380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.481469  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:18.627648  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:18.900162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:18.913170  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:18.981679  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.114147  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:19.127641  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.399059  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.416675  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.481893  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:19.628587  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:19.901500  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:19.914861  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:19.982268  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.127892  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.400086  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.412871  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.481643  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:20.631895  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:20.899376  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:20.913218  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:20.983029  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.115273  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:21.128235  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.398928  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.412581  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.481844  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:21.628150  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:21.899645  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:21.913721  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:21.981633  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.400392  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.413600  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.482801  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:22.628019  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:22.900239  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:22.913015  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:22.981463  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.139117  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:23.140261  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.399288  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.415368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.481661  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:23.629617  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:23.902440  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:23.915257  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:23.981352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.129929  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.399488  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.413165  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.482158  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:24.627817  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:24.899083  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:24.915671  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:24.981425  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.127985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.399318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.413105  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.482011  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:25.613886  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:25.627368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:25.902246  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:25.912609  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:25.981536  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.129732  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.529301  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.529596  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.529663  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:26.633421  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:26.901177  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:26.915422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:26.981413  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.127789  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.398754  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.413042  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.482631  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:27.614073  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:27.629448  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:27.900640  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:27.913221  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:27.981368  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.132334  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.399797  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.413632  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.481152  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:28.628716  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:28.900159  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:28.914554  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 12:56:28.981591  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.127504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.399722  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.414894  610501 kapi.go:107] duration metric: took 38.006762133s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 12:56:29.481634  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:29.614187  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:29.627857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:29.899322  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:29.981345  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.128550  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.400316  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.481555  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:30.627746  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:30.900189  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:30.982356  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.129538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.400422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.481492  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:31.629916  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:31.899144  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:31.981857  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.114220  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:32.127498  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.399699  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.482072  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:32.651101  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:32.899211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:32.981322  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.127482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.401190  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.501374  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:33.628422  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:33.900401  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:33.981380  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.127915  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.400211  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.484293  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:34.614543  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:34.627483  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:34.902843  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:34.981683  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.127848  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.398956  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.481444  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:35.626983  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:35.900313  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:35.980852  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.128263  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:36.401318  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:36.482199  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:36.616548  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:36.628510  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.039771  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.040297  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.128332  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.399002  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.481655  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:37.627644  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:37.900542  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:37.981657  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.127698  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.399200  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.481409  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:38.628445  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:38.899393  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:38.981201  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.370826  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.372189  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:39.399948  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.481855  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:39.627676  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:39.898860  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:39.981735  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.128056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.399370  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.481858  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:40.628636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:40.900139  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:40.982329  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.130978  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.399499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.481032  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:41.614210  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:41.627128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:41.899422  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:41.981776  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.127905  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.398936  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.481585  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:42.629134  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:42.899492  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:42.982922  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.127672  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.400155  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.481991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:43.615112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:43.629339  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:43.899804  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:43.983481  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.127535  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.399564  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.481474  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:44.633982  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:44.899485  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:44.981347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.127532  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.413987  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.481650  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:45.615259  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:45.629151  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:45.899534  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:45.981133  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.127626  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.401424  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.481108  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:46.626748  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:46.899481  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:46.983910  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.127352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.400499  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.481216  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:47.629148  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:47.899944  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:47.981178  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.114820  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:48.126832  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.400385  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.481113  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:48.627340  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:48.900317  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:48.982939  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.440975  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.448941  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.483270  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:49.627430  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:49.899374  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:49.983132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.127931  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.404223  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.482231  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:50.613962  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:50.627506  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:50.901701  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:50.981212  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.253571  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.400214  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.485666  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:51.628816  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:51.899909  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:51.981764  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.132414  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.400653  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.482230  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:52.627845  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:52.901162  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:52.981128  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.114152  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:53.127321  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.399495  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.480504  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:53.627259  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:53.899327  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:53.982045  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.126980  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.400103  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.482185  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:54.630283  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:54.899841  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:54.982038  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.127806  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.400082  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.482058  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:55.614659  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:55.628985  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:55.899964  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:55.981440  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.145450  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.400153  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.481988  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:56.627636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:56.903212  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:56.985482  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.127953  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.405938  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.480991  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:57.615293  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:57.627790  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:57.899165  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:57.981629  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.295639  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.401472  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.480992  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:58.628426  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:58.899375  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:58.982298  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.128070  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.399507  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.484338  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:56:59.630551  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:56:59.636538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:56:59.900561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:56:59.982224  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.129894  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.399561  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.482729  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:00.627508  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:00.903740  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:00.981954  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.133438  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:01.399150  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:01.481779  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:01.630056  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.352725  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.353084  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.353297  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.357311  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:02.399678  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.481822  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:02.627596  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:02.899845  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:02.981411  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.127911  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.398988  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.481636  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:03.632574  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:03.899755  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:03.981290  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.128310  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.414840  610501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 12:57:04.481441  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:04.613658  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:04.629956  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:04.901144  610501 kapi.go:107] duration metric: took 1m13.506095567s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 12:57:04.981604  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.128191  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.481173  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:05.628513  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:05.982076  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.127702  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.481434  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:06.614389  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:06.627307  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:06.981074  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.127319  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.481753  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:07.627396  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:07.981256  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.127837  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:08.483769  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:08.627352  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.127470  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.132668  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.143694  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:09.480949  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 12:57:09.627347  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:09.982170  610501 kapi.go:107] duration metric: took 1m15.004478307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 12:57:09.984996  610501 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-840762 cluster.
	I0520 12:57:09.987400  610501 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 12:57:09.989848  610501 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
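The three gcp-auth messages above describe how to opt a pod out of credential mounting: add the gcp-auth-skip-secret label to the pod. A minimal sketch of such a pod manifest applied against this cluster's context follows; the pod name, the image, and the label value "true" are assumptions for illustration, only the label key comes from the message above:

	# hypothetical pod that opts out of gcp-auth credential mounting
	kubectl --context addons-840762 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"     # key from the addon message above; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox                   # placeholder image
	    command: ["sleep", "3600"]
	EOF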
	I0520 12:57:10.128713  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:10.626906  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.126993  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:11.615193  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:11.627544  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.127562  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:12.627291  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.127538  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:13.615932  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:13.627132  610501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 12:57:14.129554  610501 kapi.go:107] duration metric: took 1m21.00719057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 12:57:14.132384  610501 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, yakd, helm-tiller, ingress-dns, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0520 12:57:14.134475  610501 addons.go:505] duration metric: took 1m31.2134234s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner yakd helm-tiller ingress-dns metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
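The addon set reported in the two lines above can be re-checked from the host at any time; a quick sketch, assuming a minikube binary on PATH (the test itself drives a locally built binary):

	# show enabled/disabled addons for this profile
	minikube -p addons-840762 addons list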
	I0520 12:57:16.114935  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:18.615065  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:21.115704  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:23.614492  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:25.615476  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:28.115096  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:30.613576  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:32.615824  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:35.114244  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:37.114736  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:39.115280  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:41.616112  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:44.115963  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:46.613676  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:48.615457  610501 pod_ready.go:102] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"False"
	I0520 12:57:49.115531  610501 pod_ready.go:92] pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.115556  610501 pod_ready.go:81] duration metric: took 1m56.507573924s for pod "metrics-server-c59844bb4-8g977" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.115567  610501 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120872  610501 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace has status "Ready":"True"
	I0520 12:57:49.120891  610501 pod_ready.go:81] duration metric: took 5.316291ms for pod "nvidia-device-plugin-daemonset-w5d66" in "kube-system" namespace to be "Ready" ...
	I0520 12:57:49.120917  610501 pod_ready.go:38] duration metric: took 1m57.662965814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:57:49.120943  610501 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:57:49.121015  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:49.121087  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:49.196694  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:49.196728  610501 cri.go:89] found id: ""
	I0520 12:57:49.196740  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:49.196806  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.201213  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:49.201309  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:49.261920  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.261956  610501 cri.go:89] found id: ""
	I0520 12:57:49.261967  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:49.262042  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.265960  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:49.266026  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:49.311594  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.311616  610501 cri.go:89] found id: ""
	I0520 12:57:49.311624  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:49.311677  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.315953  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:49.316040  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:49.364885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:49.364924  610501 cri.go:89] found id: ""
	I0520 12:57:49.364932  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:49.364988  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.369010  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:49.369072  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:49.424747  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:49.424768  610501 cri.go:89] found id: ""
	I0520 12:57:49.424776  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:49.424834  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.428991  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:49.429080  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:49.499475  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.499510  610501 cri.go:89] found id: ""
	I0520 12:57:49.499523  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:49.499594  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:49.504418  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:49.504502  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:49.561072  610501 cri.go:89] found id: ""
	I0520 12:57:49.561100  610501 logs.go:276] 0 containers: []
	W0520 12:57:49.561113  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:49.561123  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:49.561138  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:49.654245  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:49.654289  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:49.728091  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:49.728129  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:49.807124  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:49.807159  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:49.880558  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:49.880602  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:49.936020  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:49.936062  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:49.950180  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:49.950226  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:50.132293  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:50.132328  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:50.176058  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:50.176093  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:50.218071  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:50.218105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:50.255262  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:50.255300  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:53.392370  610501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:57:53.425325  610501 api_server.go:72] duration metric: took 2m10.504279951s to wait for apiserver process to appear ...
	I0520 12:57:53.425356  610501 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:57:53.425406  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:53.425466  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:53.460785  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.460818  610501 cri.go:89] found id: ""
	I0520 12:57:53.460830  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:53.460890  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.464985  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:53.465054  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:53.500156  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:53.500182  610501 cri.go:89] found id: ""
	I0520 12:57:53.500192  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:53.500268  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.504273  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:53.504349  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:53.542028  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:53.542056  610501 cri.go:89] found id: ""
	I0520 12:57:53.542068  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:53.542122  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.546279  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:53.546355  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:53.583434  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:53.583471  610501 cri.go:89] found id: ""
	I0520 12:57:53.583481  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:53.583549  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.587699  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:53.587757  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:53.629320  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.629350  610501 cri.go:89] found id: ""
	I0520 12:57:53.629359  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:53.629420  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.633673  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:53.633735  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:53.670154  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:53.670182  610501 cri.go:89] found id: ""
	I0520 12:57:53.670192  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:53.670259  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:53.674100  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:53.674173  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:53.711324  610501 cri.go:89] found id: ""
	I0520 12:57:53.711357  610501 logs.go:276] 0 containers: []
	W0520 12:57:53.711365  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:53.711380  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:53.711400  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:53.730840  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:53.730875  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:53.852051  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:53.852082  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:53.901591  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:53.901628  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:53.941072  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:53.941105  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:54.644393  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:54.644441  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:57:54.695277  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:54.695317  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:54.775974  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:54.776021  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:54.831859  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:54.831908  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:54.876969  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:54.877020  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:54.931426  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:54.931472  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.491119  610501 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0520 12:57:57.495836  610501 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0520 12:57:57.497181  610501 api_server.go:141] control plane version: v1.30.1
	I0520 12:57:57.497205  610501 api_server.go:131] duration metric: took 4.071843024s to wait for apiserver health ...
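The healthz probe above can be reproduced directly against the same endpoint; a sketch assuming the apiserver still answers on 192.168.39.19:8443 and that /healthz is reachable with the configured credentials (the raw curl form additionally assumes anonymous access to /healthz is allowed):

	# query the endpoint checked above (self-signed cert, hence -k)
	curl -k https://192.168.39.19:8443/healthz
	# same check through the kubeconfig credentials
	kubectl --context addons-840762 get --raw /healthz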
	I0520 12:57:57.497214  610501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:57:57.497235  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 12:57:57.497313  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 12:57:57.534814  610501 cri.go:89] found id: "9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:57.534847  610501 cri.go:89] found id: ""
	I0520 12:57:57.534857  610501 logs.go:276] 1 containers: [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34]
	I0520 12:57:57.534924  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.538897  610501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 12:57:57.538957  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 12:57:57.578468  610501 cri.go:89] found id: "10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:57.578502  610501 cri.go:89] found id: ""
	I0520 12:57:57.578511  610501 logs.go:276] 1 containers: [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e]
	I0520 12:57:57.578571  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.582910  610501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 12:57:57.582980  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 12:57:57.622272  610501 cri.go:89] found id: "7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:57.622294  610501 cri.go:89] found id: ""
	I0520 12:57:57.622303  610501 logs.go:276] 1 containers: [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0]
	I0520 12:57:57.622353  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.626295  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 12:57:57.626351  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 12:57:57.671885  610501 cri.go:89] found id: "6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.671910  610501 cri.go:89] found id: ""
	I0520 12:57:57.671918  610501 logs.go:276] 1 containers: [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a]
	I0520 12:57:57.671970  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.676755  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 12:57:57.676827  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 12:57:57.713995  610501 cri.go:89] found id: "a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:57.714014  610501 cri.go:89] found id: ""
	I0520 12:57:57.714023  610501 logs.go:276] 1 containers: [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c]
	I0520 12:57:57.714084  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.718184  610501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 12:57:57.718247  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 12:57:57.755752  610501 cri.go:89] found id: "6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:57.755782  610501 cri.go:89] found id: ""
	I0520 12:57:57.755793  610501 logs.go:276] 1 containers: [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865]
	I0520 12:57:57.755845  610501 ssh_runner.go:195] Run: which crictl
	I0520 12:57:57.759887  610501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 12:57:57.759953  610501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 12:57:57.796173  610501 cri.go:89] found id: ""
	I0520 12:57:57.796207  610501 logs.go:276] 0 containers: []
	W0520 12:57:57.796218  610501 logs.go:278] No container was found matching "kindnet"
	I0520 12:57:57.796230  610501 logs.go:123] Gathering logs for kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] ...
	I0520 12:57:57.796243  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a"
	I0520 12:57:57.843540  610501 logs.go:123] Gathering logs for CRI-O ...
	I0520 12:57:57.843582  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 12:57:58.695225  610501 logs.go:123] Gathering logs for kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] ...
	I0520 12:57:58.695278  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c"
	I0520 12:57:58.734177  610501 logs.go:123] Gathering logs for kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] ...
	I0520 12:57:58.734221  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865"
	I0520 12:57:58.798029  610501 logs.go:123] Gathering logs for kubelet ...
	I0520 12:57:58.798075  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 12:57:58.879582  610501 logs.go:123] Gathering logs for dmesg ...
	I0520 12:57:58.879638  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 12:57:58.894417  610501 logs.go:123] Gathering logs for describe nodes ...
	I0520 12:57:58.894467  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 12:57:59.011252  610501 logs.go:123] Gathering logs for kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] ...
	I0520 12:57:59.011297  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34"
	I0520 12:57:59.058509  610501 logs.go:123] Gathering logs for etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] ...
	I0520 12:57:59.058547  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e"
	I0520 12:57:59.120006  610501 logs.go:123] Gathering logs for coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] ...
	I0520 12:57:59.120045  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0"
	I0520 12:57:59.157503  610501 logs.go:123] Gathering logs for container status ...
	I0520 12:57:59.157537  610501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 12:58:01.712083  610501 system_pods.go:59] 18 kube-system pods found
	I0520 12:58:01.712116  610501 system_pods.go:61] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.712121  610501 system_pods.go:61] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.712124  610501 system_pods.go:61] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.712127  610501 system_pods.go:61] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.712130  610501 system_pods.go:61] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.712133  610501 system_pods.go:61] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.712136  610501 system_pods.go:61] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.712138  610501 system_pods.go:61] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.712141  610501 system_pods.go:61] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.712144  610501 system_pods.go:61] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.712146  610501 system_pods.go:61] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.712149  610501 system_pods.go:61] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.712152  610501 system_pods.go:61] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.712154  610501 system_pods.go:61] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.712157  610501 system_pods.go:61] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.712160  610501 system_pods.go:61] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.712164  610501 system_pods.go:61] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.712169  610501 system_pods.go:61] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.712174  610501 system_pods.go:74] duration metric: took 4.214955142s to wait for pod list to return data ...
	I0520 12:58:01.712182  610501 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:58:01.714213  610501 default_sa.go:45] found service account: "default"
	I0520 12:58:01.714230  610501 default_sa.go:55] duration metric: took 2.042647ms for default service account to be created ...
	I0520 12:58:01.714236  610501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:58:01.722252  610501 system_pods.go:86] 18 kube-system pods found
	I0520 12:58:01.722281  610501 system_pods.go:89] "coredns-7db6d8ff4d-vp4g8" [b9838e64-b32b-489f-8944-3a29c87892a6] Running
	I0520 12:58:01.722287  610501 system_pods.go:89] "csi-hostpath-attacher-0" [382113f8-2f09-4b46-964e-9a898b8cde1a] Running
	I0520 12:58:01.722291  610501 system_pods.go:89] "csi-hostpath-resizer-0" [f25c7026-9336-4c3b-baa9-382b164e4060] Running
	I0520 12:58:01.722296  610501 system_pods.go:89] "csi-hostpathplugin-k4gtt" [1b5b12d1-1c43-4122-9b62-f05cc49ba29c] Running
	I0520 12:58:01.722312  610501 system_pods.go:89] "etcd-addons-840762" [7e4a944d-05a8-49fc-b415-b912821c0b95] Running
	I0520 12:58:01.722317  610501 system_pods.go:89] "kube-apiserver-addons-840762" [5d4315b9-e854-4790-a1ff-e2749c9a4986] Running
	I0520 12:58:01.722321  610501 system_pods.go:89] "kube-controller-manager-addons-840762" [113efbaf-3b1e-471f-99fa-700614bf583d] Running
	I0520 12:58:01.722325  610501 system_pods.go:89] "kube-ingress-dns-minikube" [c057ec77-ddf8-4ad7-9001-a7b4f48a2d00] Running
	I0520 12:58:01.722329  610501 system_pods.go:89] "kube-proxy-mpkr9" [d7a0dc50-43c6-4927-9c13-45e9104e2206] Running
	I0520 12:58:01.722333  610501 system_pods.go:89] "kube-scheduler-addons-840762" [f4f8cee3-7755-409f-86fc-c558934af287] Running
	I0520 12:58:01.722340  610501 system_pods.go:89] "metrics-server-c59844bb4-8g977" [2f766954-b3a4-4592-865f-b37297fefae7] Running
	I0520 12:58:01.722344  610501 system_pods.go:89] "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
	I0520 12:58:01.722350  610501 system_pods.go:89] "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
	I0520 12:58:01.722354  610501 system_pods.go:89] "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
	I0520 12:58:01.722360  610501 system_pods.go:89] "snapshot-controller-745499f584-h6pwb" [09a87307-3db0-4409-a938-045a643b3019] Running
	I0520 12:58:01.722364  610501 system_pods.go:89] "snapshot-controller-745499f584-tskjh" [68e4661d-25a9-4ea9-aca7-01ab30e83701] Running
	I0520 12:58:01.722370  610501 system_pods.go:89] "storage-provisioner" [0af02429-e13b-4886-993d-0d7815e2fb69] Running
	I0520 12:58:01.722376  610501 system_pods.go:89] "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
	I0520 12:58:01.722382  610501 system_pods.go:126] duration metric: took 8.141251ms to wait for k8s-apps to be running ...
	I0520 12:58:01.722391  610501 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:58:01.722435  610501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:58:01.736978  610501 system_svc.go:56] duration metric: took 14.575937ms WaitForService to wait for kubelet
	I0520 12:58:01.737014  610501 kubeadm.go:576] duration metric: took 2m18.815967987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:58:01.737035  610501 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:58:01.740116  610501 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:58:01.740145  610501 node_conditions.go:123] node cpu capacity is 2
	I0520 12:58:01.740159  610501 node_conditions.go:105] duration metric: took 3.120029ms to run NodePressure ...
	I0520 12:58:01.740172  610501 start.go:240] waiting for startup goroutines ...
	I0520 12:58:01.740179  610501 start.go:245] waiting for cluster config update ...
	I0520 12:58:01.740195  610501 start.go:254] writing updated cluster config ...
	I0520 12:58:01.740485  610501 ssh_runner.go:195] Run: rm -f paused
	I0520 12:58:01.793273  610501 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:58:01.796159  610501 out.go:177] * Done! kubectl is now configured to use "addons-840762" cluster and "default" namespace by default
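With the start sequence finished, the cluster state summarized above can be spot-checked from the host; a short sketch using the context name printed in the final message:

	# node readiness and the kube-system pods listed earlier
	kubectl --context addons-840762 get nodes -o wide
	kubectl --context addons-840762 get pods -n kube-system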
	
	
	==> CRI-O <==
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.533090757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c70e006987f007c0ebbf75cdbc5176502f99204ab9de66bcd4b2faf6c7d78507,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:3,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209868967532259,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2248c8721ce621dbbebd3943541aeb80cfc7ea9777629d68f7717d25fce85,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1716209833371876082,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05
cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8dd815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78efe68071e456236ccbe20825a98b2ccf36c198b7717633bb23f80d22dc230d,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1716209831862590521,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 83c176ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed1c65566577462853aa665236332c0b238d2b94daada58c54cf0a99aa7cf97,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1716209830185771840,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: c359bfa5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kube
rnetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aae45bf683263f56fca6142ef9de76b4365eaa07b0154cfbbba314a3b7753021,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1716209825178707176,Labels:map[string]strin
g{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 41f2cb24,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0,PodSandboxId:820ef6d8fc1bcb391b65a6d9c531222e22002de91963d50d1741dcb8e0567d60,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:959e313aceec9f38e18a329ca3756402959e84e63ae8ba7ac1ee48aec28d51b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee54966f3891d75b255d1
60236368a4f9d3b588d32fb44bd04aea5101143e829,State:CONTAINER_RUNNING,CreatedAt:1716209823678776645,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-768f948f8f-5dgl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 156342ec-3be6-4be5-9629-f89ca1ee418b,},Annotations:map[string]string{io.kubernetes.container.hash: a834ef75,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:294fefcae917ceff801b2db1344ff11b2afffafb36725bde9
4632c5a21650761,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1716209816791806131,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 2652405,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:9404eead981109cb81c815d2841808a860c38c7e92f3fbe72013e0b1237b27f2,PodSandboxId:e20fe1e9698421d0c6eac0eec40e1200566de57623583d56011a83c2dfee1eee,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1716209814543656128,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25c7026-9336-4c3b-baa9-382b164e4060,},Annotations:map[string]string{io.kubernetes.container.hash: 6915116a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:b679158bddae857b6f9d878a367a7bd60b651dfc8771c1d4467ca3c5bf5c4f90,PodSandboxId:7ba9cfbccb5b8a2ba19ebbd95781bd9319b68e67a5cbfeabe36ae9570954aeac,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1716209812945733911,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382113f8-2f09-4b46-964e-9a898b8cde1a,},Annotations:map[string]string{io.kubernetes.container.hash: a394d6ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39c547833242bb1fd08e856aa714f70f5be0bb6bd8ad959e046e7ddfcd296fe,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1716209811347394338,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: dbc3ddf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ab732368a0ffec347262b8621bd3b3879fbd26fe165d469dffeebf885555dd,PodSandboxId:a64ee9ae2d3a4a2ed85e1751b186dc639427c0f780c6b33401156ca139483a64,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806914532353,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-h6pwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a87307-3db0-4409-a938-045a643b3019,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a6da34b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c84fb970a19c036d7a7d5e0cb2fb0b46afa067ffd09c78ebfe4308924e8c39,PodSandboxId:fd5223edc2a06721d335240eca78708a58d6c8f6a3c658b2220da3fdbb9604c0,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806817939662,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-tskjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e4661d-25
a9-4ea9-aca7-01ab30e83701,},Annotations:map[string]string{io.kubernetes.container.hash: cd700aef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4982733f8de54268cd4294445654740e9af1f2febdee6f9c8d202168760f27b5,PodSandboxId:cb0fcb830dafd747b39d56de4cd91939adc16fe73a513c1f8461607312bfba78,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1716209805096487931,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7zmth,io.kubernetes.
pod.namespace: local-path-storage,io.kubernetes.pod.uid: d97b9139-a67b-45d8-b87a-b9467f460e15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a31a520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.na
me: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171
6209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac2850631eedfb7f192dac4fd6d9c59b7cfa2de142746055ffb9006b09d9df40,PodSandboxId:a6334b74e1b452893ef202939149c328780f2f317314b5c32fd2510c0d9b5d1b,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39
089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1716209782172283998,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-6677d64bcd-9z85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58791b3-4277-403d-9b31-4f938890905e,},Annotations:map[string]string{io.kubernetes.container.hash: fe90e4ff,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{
Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d486f81c5c6
a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f,PodSandboxId:238da6830db9a2368fabac6ea0dfd481c8ded1607f30cfa736627a52807a32a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1716209758862842017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c057ec77-ddf8-4ad7-9001-a7b4f48a2d00,},Annotations:map[string]string{io.kubernetes.container.hash: 8f84b6fb,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e884271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0552ee2-fd9e-4eb3-a8fd-f3ca4fbdd4a1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.549620472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a7a11a6-fb26-4f4b-94d6-7ded21523e85 name=/runtime.v1.RuntimeService/Version
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.549686519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a7a11a6-fb26-4f4b-94d6-7ded21523e85 name=/runtime.v1.RuntimeService/Version
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.550950622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ecab54a-80d4-4b02-b478-ab6888bc2078 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.552285500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209910552259564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:501529,},InodesUsed:&UInt64Value{Value:180,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ecab54a-80d4-4b02-b478-ab6888bc2078 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.552793309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccff6a43-84a1-4288-a2ac-3cf87f30291d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.552863735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccff6a43-84a1-4288-a2ac-3cf87f30291d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.553545808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c70e006987f007c0ebbf75cdbc5176502f99204ab9de66bcd4b2faf6c7d78507,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:3,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209868967532259,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2248c8721ce621dbbebd3943541aeb80cfc7ea9777629d68f7717d25fce85,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1716209833371876082,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05
cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8dd815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78efe68071e456236ccbe20825a98b2ccf36c198b7717633bb23f80d22dc230d,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1716209831862590521,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 83c176ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed1c65566577462853aa665236332c0b238d2b94daada58c54cf0a99aa7cf97,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1716209830185771840,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: c359bfa5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kube
rnetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aae45bf683263f56fca6142ef9de76b4365eaa07b0154cfbbba314a3b7753021,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1716209825178707176,Labels:map[string]strin
g{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 41f2cb24,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0,PodSandboxId:820ef6d8fc1bcb391b65a6d9c531222e22002de91963d50d1741dcb8e0567d60,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:959e313aceec9f38e18a329ca3756402959e84e63ae8ba7ac1ee48aec28d51b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee54966f3891d75b255d1
60236368a4f9d3b588d32fb44bd04aea5101143e829,State:CONTAINER_RUNNING,CreatedAt:1716209823678776645,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-768f948f8f-5dgl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 156342ec-3be6-4be5-9629-f89ca1ee418b,},Annotations:map[string]string{io.kubernetes.container.hash: a834ef75,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:294fefcae917ceff801b2db1344ff11b2afffafb36725bde9
4632c5a21650761,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1716209816791806131,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 2652405,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:9404eead981109cb81c815d2841808a860c38c7e92f3fbe72013e0b1237b27f2,PodSandboxId:e20fe1e9698421d0c6eac0eec40e1200566de57623583d56011a83c2dfee1eee,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1716209814543656128,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25c7026-9336-4c3b-baa9-382b164e4060,},Annotations:map[string]string{io.kubernetes.container.hash: 6915116a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:b679158bddae857b6f9d878a367a7bd60b651dfc8771c1d4467ca3c5bf5c4f90,PodSandboxId:7ba9cfbccb5b8a2ba19ebbd95781bd9319b68e67a5cbfeabe36ae9570954aeac,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1716209812945733911,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382113f8-2f09-4b46-964e-9a898b8cde1a,},Annotations:map[string]string{io.kubernetes.container.hash: a394d6ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39c547833242bb1fd08e856aa714f70f5be0bb6bd8ad959e046e7ddfcd296fe,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1716209811347394338,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: dbc3ddf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ab732368a0ffec347262b8621bd3b3879fbd26fe165d469dffeebf885555dd,PodSandboxId:a64ee9ae2d3a4a2ed85e1751b186dc639427c0f780c6b33401156ca139483a64,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806914532353,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-h6pwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a87307-3db0-4409-a938-045a643b3019,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a6da34b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c84fb970a19c036d7a7d5e0cb2fb0b46afa067ffd09c78ebfe4308924e8c39,PodSandboxId:fd5223edc2a06721d335240eca78708a58d6c8f6a3c658b2220da3fdbb9604c0,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806817939662,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-tskjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e4661d-25
a9-4ea9-aca7-01ab30e83701,},Annotations:map[string]string{io.kubernetes.container.hash: cd700aef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4982733f8de54268cd4294445654740e9af1f2febdee6f9c8d202168760f27b5,PodSandboxId:cb0fcb830dafd747b39d56de4cd91939adc16fe73a513c1f8461607312bfba78,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1716209805096487931,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7zmth,io.kubernetes.
pod.namespace: local-path-storage,io.kubernetes.pod.uid: d97b9139-a67b-45d8-b87a-b9467f460e15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a31a520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.na
me: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171
6209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac2850631eedfb7f192dac4fd6d9c59b7cfa2de142746055ffb9006b09d9df40,PodSandboxId:a6334b74e1b452893ef202939149c328780f2f317314b5c32fd2510c0d9b5d1b,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39
089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1716209782172283998,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-6677d64bcd-9z85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58791b3-4277-403d-9b31-4f938890905e,},Annotations:map[string]string{io.kubernetes.container.hash: fe90e4ff,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{
Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d486f81c5c6
a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f,PodSandboxId:238da6830db9a2368fabac6ea0dfd481c8ded1607f30cfa736627a52807a32a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1716209758862842017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c057ec77-ddf8-4ad7-9001-a7b4f48a2d00,},Annotations:map[string]string{io.kubernetes.container.hash: 8f84b6fb,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e884271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccff6a43-84a1-4288-a2ac-3cf87f30291d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.627000136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=564fbce5-c954-40c1-8e6a-e53f38e957a5 name=/runtime.v1.RuntimeService/Version
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.627083250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=564fbce5-c954-40c1-8e6a-e53f38e957a5 name=/runtime.v1.RuntimeService/Version
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.628069730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e51cc31-af9c-42d3-a865-0eb7627be94f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.629191442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209910629167743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:501529,},InodesUsed:&UInt64Value{Value:180,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e51cc31-af9c-42d3-a865-0eb7627be94f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.629697936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88107d8e-6d4f-4dec-8dd5-5d3d45a75dd2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.629766588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88107d8e-6d4f-4dec-8dd5-5d3d45a75dd2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.631059766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c70e006987f007c0ebbf75cdbc5176502f99204ab9de66bcd4b2faf6c7d78507,PodSandboxId:fe89ff446540b375ad9a8700890a0ee7abe5caea2c47f19686fac6f8d3d12784,Metadata:&ContainerMetadata{Name:gadget,Attempt:3,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a8279285b9649c62230b8c882ba99e644d3fe8922fb14b53692633322555df8,State:CONTAINER_EXITED,CreatedAt:1716209868967532259,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-4r2zg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 20112e09-b29e-4ddb-96ef-4d06088304a4,},Annotations:map[string]string{io.kubernetes.container.hash: ce96a3ac,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c2248c8721ce621dbbebd3943541aeb80cfc7ea9777629d68f7717d25fce85,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1716209833371876082,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05
cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8dd815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78efe68071e456236ccbe20825a98b2ccf36c198b7717633bb23f80d22dc230d,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1716209831862590521,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 83c176ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed1c65566577462853aa665236332c0b238d2b94daada58c54cf0a99aa7cf97,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1716209830185771840,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: c359bfa5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c,PodSandboxId:6684887cb09aaf19d350757b7738b384a8240762fa9f58af2e34005a4a40b9b8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716209829223861781,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-cjjrn,io.kube
rnetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6847135c-da26-4866-92c6-81b6e53be1a8,},Annotations:map[string]string{io.kubernetes.container.hash: cef7ac2c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aae45bf683263f56fca6142ef9de76b4365eaa07b0154cfbbba314a3b7753021,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1716209825178707176,Labels:map[string]strin
g{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 41f2cb24,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea9ad625840decc8395ef9b988e2680113a729cdaa8419dbe5270a6ad23cc0,PodSandboxId:820ef6d8fc1bcb391b65a6d9c531222e22002de91963d50d1741dcb8e0567d60,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:959e313aceec9f38e18a329ca3756402959e84e63ae8ba7ac1ee48aec28d51b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee54966f3891d75b255d1
60236368a4f9d3b588d32fb44bd04aea5101143e829,State:CONTAINER_RUNNING,CreatedAt:1716209823678776645,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-768f948f8f-5dgl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 156342ec-3be6-4be5-9629-f89ca1ee418b,},Annotations:map[string]string{io.kubernetes.container.hash: a834ef75,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:294fefcae917ceff801b2db1344ff11b2afffafb36725bde9
4632c5a21650761,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1716209816791806131,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: 2652405,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:9404eead981109cb81c815d2841808a860c38c7e92f3fbe72013e0b1237b27f2,PodSandboxId:e20fe1e9698421d0c6eac0eec40e1200566de57623583d56011a83c2dfee1eee,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1716209814543656128,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25c7026-9336-4c3b-baa9-382b164e4060,},Annotations:map[string]string{io.kubernetes.container.hash: 6915116a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:b679158bddae857b6f9d878a367a7bd60b651dfc8771c1d4467ca3c5bf5c4f90,PodSandboxId:7ba9cfbccb5b8a2ba19ebbd95781bd9319b68e67a5cbfeabe36ae9570954aeac,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1716209812945733911,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382113f8-2f09-4b46-964e-9a898b8cde1a,},Annotations:map[string]string{io.kubernetes.container.hash: a394d6ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39c547833242bb1fd08e856aa714f70f5be0bb6bd8ad959e046e7ddfcd296fe,PodSandboxId:fbe68ca3b46fc4bd4cea2109c6fe58327a43501f83317893fa0485fe9e2986da,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1716209811347394338,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-k4gtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b12d1-1c43-4122-9b62-f05cc49ba29c,},Annotations:map[string]string{io.kubernetes.container.hash: dbc3ddf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:120b275a98f0ebf77b544df46ac2d36888943e39473a2da39018759b246c07df,PodSandboxId:18e6d4697d58e9344944fc76df9d677a3adf550e7afe011bf2ba0b9c5385bf87,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209810689078572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xpvg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a441bce-20c7-4f19-b940-4cb826784cea,},Annotations:map[string]string{io.kubernetes.container.hash: 9d906670,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b7f95bfe63215f39e44f4013e882d6e05871185c5bc7ecac61c506f49452d1,PodSandboxId:83f3bee4f32fc75bfa47e3638164f0186d5672c5607b70e50700d0bf0acee69e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716209809701371022,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lglgg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 901789e2-d702-40f6-a420-a1d24db58a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a24b05e,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ab732368a0ffec347262b8621bd3b3879fbd26fe165d469dffeebf885555dd,PodSandboxId:a64ee9ae2d3a4a2ed85e1751b186dc639427c0f780c6b33401156ca139483a64,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806914532353,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-h6pwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a87307-3db0-4409-a938-045a643b3019,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a6da34b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c84fb970a19c036d7a7d5e0cb2fb0b46afa067ffd09c78ebfe4308924e8c39,PodSandboxId:fd5223edc2a06721d335240eca78708a58d6c8f6a3c658b2220da3fdbb9604c0,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1716209806817939662,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-745499f584-tskjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e4661d-25
a9-4ea9-aca7-01ab30e83701,},Annotations:map[string]string{io.kubernetes.container.hash: cd700aef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4982733f8de54268cd4294445654740e9af1f2febdee6f9c8d202168760f27b5,PodSandboxId:cb0fcb830dafd747b39d56de4cd91939adc16fe73a513c1f8461607312bfba78,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1716209805096487931,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7zmth,io.kubernetes.
pod.namespace: local-path-storage,io.kubernetes.pod.uid: d97b9139-a67b-45d8-b87a-b9467f460e15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a31a520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9db02ffacd425f854f08dde109b60e0025ff9b8a304a2a145568f9e95e454c,PodSandboxId:354aac86fd4a4665bf8afc7fe6bd167e24d314d8f0d13397a2aeebfe19cfd9df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716209801955365316,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.na
me: metrics-server-c59844bb4-8g977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f766954-b3a4-4592-865f-b37297fefae7,},Annotations:map[string]string{io.kubernetes.container.hash: feff7df,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5640739ae135dbb8daed59ce1e4f39a4905cdd7c978310c3867d5cc70d02a655,PodSandboxId:47557659a5b0aed467988a3d8f68654d6895db6c19735e9a9f4609c5d26f8688,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171
6209799507008718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-hgp7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 98ccbc95-97f1-48f6-99a4-6c335bd4b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 21e85f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac2850631eedfb7f192dac4fd6d9c59b7cfa2de142746055ffb9006b09d9df40,PodSandboxId:a6334b74e1b452893ef202939149c328780f2f317314b5c32fd2510c0d9b5d1b,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39
089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1716209782172283998,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-6677d64bcd-9z85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58791b3-4277-403d-9b31-4f938890905e,},Annotations:map[string]string{io.kubernetes.container.hash: fe90e4ff,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcce271acb34244cd7f2a77899ca2646459cb60e24733226b032214322c3ba,PodSandboxId:b221456a6d2ca4745db530432ec94e569009cf09fd601a90163006e36d4d4d57,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{
Image:gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ff6c6518681d96dab31ba95ce65298861a2e2e2d5b2afbb168e6da22563c13d,State:CONTAINER_RUNNING,CreatedAt:1716209775813947254,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6fcd4f6f98-tzksc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14c3ddef-1fef-49b7-84cc-6d33520ba034,},Annotations:map[string]string{io.kubernetes.container.hash: 6f3fb5ec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d486f81c5c6
a791dc681203b9599c0c88d2b09dba9529ee98c1844379956549f,PodSandboxId:238da6830db9a2368fabac6ea0dfd481c8ded1607f30cfa736627a52807a32a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1716209758862842017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c057ec77-ddf8-4ad7-9001-a7b4f48a2d00,},Annotations:map[string]string{io.kubernetes.container.hash: 8f84b6fb,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638,PodSandboxId:345c392f5452dbde2b2794b633b10f9f223744240388bd1a1382a7f432edbde9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209748574401075,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af02429-e13b-4886-993d-0d7815e2fb69,},Annotations:map[string]string{io.kubernetes.container.hash: a6b6c5f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0,PodSandboxId:79eaca6d020368574c107991ba037d147a5e6759ce595522b190b3208048e31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209745620204527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vp4g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9838e64-b32b-489f-8944-3a29c87892a6,},Annotations:map[string]string{io.kubernetes.container.hash: 49e4769a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c,PodSandboxId:e6b145e6b7a46c742fc5409c92716b9ead3d1a007fe901777100e3e904d089f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209742892346346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpkr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a0dc50-43c6-4927-9c13-45e9104e2206,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e884271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e,PodSandboxId:a496785b5b5f53d70bf2eae6408569665774f695bf07f2aeec756a75513b9a74,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209724111327099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc20fa6c7f57dfba2ef2611768216c5c,},Annotations:map[string]string{io.kubernetes.container.hash: e0715a78
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a,PodSandboxId:31de9fbe23d9ba41772d69015e4518cb70c8cdc7386fbac20bac4b64809284c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209724041565547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4bb06d2b47c119024d856c02f66b4d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865,PodSandboxId:ef74fd5cfc67f15c584b795a04fe940f1036e1a37553be890a44cabfd410c9f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209724027576770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb210768643f1d2a3f5e71d39e6100ee,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34,PodSandboxId:d591c03b18dc1366ee18164ecc1c47eeaa8739d08ea95b335348aa29b028f291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209723992074280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25de9c545ad63a4181a22d9d16ed13c1,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6497a4,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88107d8e-6d4f-4dec-8dd5-5d3d45a75dd2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.661265294Z" level=debug msg="Detected compression format gzip" file="compression/compression.go:126"
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.661331258Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
	May 20 12:58:30 addons-840762 crio[679]: time="2024-05-20 12:58:30.661394792Z" level=debug msg="ImagePull (0): docker.io/library/busybox:stable (sha256:ec562eabd705d25bfea8c8d79e4610775e375524af00552fe871d3338261563c): 0 bytes (0.00%!)(MISSING)" file="server/image_pull.go:276" id=3f5eb95b-1a1e-42db-ad16-64549e1785ac name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c70e006987f00       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a                            41 seconds ago       Exited              gadget                                   3                   fe89ff446540b       gadget-4r2zg
	b1c2248c8721c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	78efe68071e45       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	4ed1c65566577       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	135a96f190c99       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 About a minute ago   Running             gcp-auth                                 0                   6684887cb09aa       gcp-auth-5db96cd9b4-cjjrn
	aae45bf683263       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	49ea9ad625840       registry.k8s.io/ingress-nginx/controller@sha256:959e313aceec9f38e18a329ca3756402959e84e63ae8ba7ac1ee48aec28d51b9                             About a minute ago   Running             controller                               0                   820ef6d8fc1bc       ingress-nginx-controller-768f948f8f-5dgl9
	294fefcae917c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	9404eead98110       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   e20fe1e969842       csi-hostpath-resizer-0
	b679158bddae8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   7ba9cfbccb5b8       csi-hostpath-attacher-0
	c39c547833242       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   fbe68ca3b46fc       csi-hostpathplugin-k4gtt
	120b275a98f0e       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                                             About a minute ago   Exited              patch                                    1                   18e6d4697d58e       ingress-nginx-admission-patch-xpvg2
	c1b7f95bfe632       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2                   About a minute ago   Exited              create                                   0                   83f3bee4f32fc       ingress-nginx-admission-create-lglgg
	22ab732368a0f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   a64ee9ae2d3a4       snapshot-controller-745499f584-h6pwb
	e6c84fb970a19       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   fd5223edc2a06       snapshot-controller-745499f584-tskjh
	4982733f8de54       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   cb0fcb830dafd       local-path-provisioner-8d985888d-7zmth
	0e9db02ffacd4       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872                        About a minute ago   Running             metrics-server                           0                   354aac86fd4a4       metrics-server-c59844bb4-8g977
	5640739ae135d       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              About a minute ago   Running             yakd                                     0                   47557659a5b0a       yakd-dashboard-5ddbf7d777-hgp7b
	ac2850631eedf       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   a6334b74e1b45       tiller-deploy-6677d64bcd-9z85l
	78fcce271acb3       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               2 minutes ago        Running             cloud-spanner-emulator                   0                   b221456a6d2ca       cloud-spanner-emulator-6fcd4f6f98-tzksc
	d486f81c5c6a7       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   238da6830db9a       kube-ingress-dns-minikube
	8e66ec7f2ae77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   345c392f5452d       storage-provisioner
	7059a82048d9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                                             2 minutes ago        Running             coredns                                  0                   79eaca6d02036       coredns-7db6d8ff4d-vp4g8
	a0af7ffce7a12       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                                             2 minutes ago        Running             kube-proxy                               0                   e6b145e6b7a46       kube-proxy-mpkr9
	10c3d12060059       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                                             3 minutes ago        Running             etcd                                     0                   a496785b5b5f5       etcd-addons-840762
	6363b2ba4829a       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                                             3 minutes ago        Running             kube-scheduler                           0                   31de9fbe23d9b       kube-scheduler-addons-840762
	6cca9c1fefcd5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                                             3 minutes ago        Running             kube-controller-manager                  0                   ef74fd5cfc67f       kube-controller-manager-addons-840762
	9b2ffe0b08efe       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                                             3 minutes ago        Running             kube-apiserver                           0                   d591c03b18dc1       kube-apiserver-addons-840762
	
	
	==> coredns [7059a82048d9cbdba01944872dcc61dbc3d49db43b7d62ffc94501918860bba0] <==
	[INFO] 10.244.0.7:36312 - 833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133942s
	[INFO] 10.244.0.7:38189 - 20558 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140292s
	[INFO] 10.244.0.7:38189 - 49744 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00020756s
	[INFO] 10.244.0.7:40716 - 37403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000209728s
	[INFO] 10.244.0.7:40716 - 54809 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000338691s
	[INFO] 10.244.0.7:34802 - 60141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076847s
	[INFO] 10.244.0.7:34802 - 13548 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001884167s
	[INFO] 10.244.0.7:46201 - 18591 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198s
	[INFO] 10.244.0.7:46201 - 17818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000450713s
	[INFO] 10.244.0.7:44069 - 5855 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169721s
	[INFO] 10.244.0.7:44069 - 43219 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081091s
	[INFO] 10.244.0.7:48623 - 843 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089929s
	[INFO] 10.244.0.7:48623 - 64597 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000185645s
	[INFO] 10.244.0.7:51149 - 3489 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100909s
	[INFO] 10.244.0.7:51149 - 15454 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071835s
	[INFO] 10.244.0.22:56551 - 48499 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373719s
	[INFO] 10.244.0.22:40318 - 16711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108319s
	[INFO] 10.244.0.22:39466 - 14127 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116621s
	[INFO] 10.244.0.22:54206 - 13934 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058539s
	[INFO] 10.244.0.22:56712 - 54214 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114656s
	[INFO] 10.244.0.22:56107 - 36752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091644s
	[INFO] 10.244.0.22:46924 - 25436 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001332083s
	[INFO] 10.244.0.22:54686 - 62944 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001589718s
	[INFO] 10.244.0.24:57177 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000620091s
	[INFO] 10.244.0.24:37965 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088343s
	
	
	==> describe nodes <==
	Name:               addons-840762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-840762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=addons-840762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_55_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-840762
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-840762"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:55:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-840762
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:58:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:57:32 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:57:32 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:57:32 +0000   Mon, 20 May 2024 12:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:57:32 +0000   Mon, 20 May 2024 12:55:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    addons-840762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc07a572c69424e8b07c61391a8d459
	  System UUID:                0bc07a57-2c69-424e-8b07-c61391a8d459
	  Boot ID:                    1b84f601-3379-4074-9d98-222bacd601d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-tzksc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gadget                      gadget-4r2zg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-5db96cd9b4-cjjrn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-5dgl9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m40s
	  kube-system                 coredns-7db6d8ff4d-vp4g8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m49s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 csi-hostpathplugin-k4gtt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 etcd-addons-840762                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m2s
	  kube-system                 kube-apiserver-addons-840762                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 kube-controller-manager-addons-840762        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-mpkr9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-scheduler-addons-840762                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 metrics-server-c59844bb4-8g977               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         2m43s
	  kube-system                 snapshot-controller-745499f584-h6pwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 snapshot-controller-745499f584-tskjh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 tiller-deploy-6677d64bcd-9z85l               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  local-path-storage          local-path-provisioner-8d985888d-7zmth       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-hgp7b              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m8s (x8 over 3m9s)  kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x8 over 3m9s)  kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x7 over 3m9s)  kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m2s                 kubelet          Node addons-840762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s                 kubelet          Node addons-840762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s                 kubelet          Node addons-840762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m1s                 kubelet          Node addons-840762 status is now: NodeReady
	  Normal  RegisteredNode           2m50s                node-controller  Node addons-840762 event: Registered Node addons-840762 in Controller
	
	
	==> dmesg <==
	[  +3.997220] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.052508] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +0.064164] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.983822] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.075039] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.714952] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.134436] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +4.756950] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.317555] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.571836] kauditd_printk_skb: 66 callbacks suppressed
	[May20 12:56] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.326752] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.228650] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.092557] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.597356] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.620848] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.044087] kauditd_printk_skb: 61 callbacks suppressed
	[May20 12:57] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.406626] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.331220] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.843229] kauditd_printk_skb: 37 callbacks suppressed
	[May20 12:58] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.616141] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.048897] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.235566] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [10c3d120600592d0e46f54d8c3863d394465b0dd91cffcc4c09b5d9ef9a0ad7e] <==
	{"level":"warn","ts":"2024-05-20T12:57:02.321097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:57:01.88031Z","time spent":"440.773431ms","remote":"127.0.0.1:53166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14379,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-05-20T12:57:02.320322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.60948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85485"}
	{"level":"info","ts":"2024-05-20T12:57:02.321521Z","caller":"traceutil/trace.go:171","msg":"trace[2119614539] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1143; }","duration":"213.7996ms","start":"2024-05-20T12:57:02.107675Z","end":"2024-05-20T12:57:02.321474Z","steps":["trace[2119614539] 'agreement among raft nodes before linearized reading'  (duration: 212.506688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.90392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:02.322075Z","caller":"traceutil/trace.go:171","msg":"trace[402781208] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1143; }","duration":"229.620558ms","start":"2024-05-20T12:57:02.092443Z","end":"2024-05-20T12:57:02.322064Z","steps":["trace[402781208] 'agreement among raft nodes before linearized reading'  (duration: 227.894066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:02.320427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.989566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-20T12:57:02.322823Z","caller":"traceutil/trace.go:171","msg":"trace[1202776357] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1143; }","duration":"276.407737ms","start":"2024-05-20T12:57:02.046401Z","end":"2024-05-20T12:57:02.322809Z","steps":["trace[1202776357] 'agreement among raft nodes before linearized reading'  (duration: 273.988404ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:09.105325Z","caller":"traceutil/trace.go:171","msg":"trace[1398827240] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"316.084124ms","start":"2024-05-20T12:57:08.789227Z","end":"2024-05-20T12:57:09.105311Z","steps":["trace[1398827240] 'process raft request'  (duration: 315.952768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.105525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:57:08.789209Z","time spent":"316.215355ms","remote":"127.0.0.1:53244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" mod_revision:1128 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q7ycbuju4r3pza3uylsengkn3e\" > >"}
	{"level":"info","ts":"2024-05-20T12:57:09.106442Z","caller":"traceutil/trace.go:171","msg":"trace[326326687] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"209.919064ms","start":"2024-05-20T12:57:08.89651Z","end":"2024-05-20T12:57:09.106429Z","steps":["trace[326326687] 'read index received'  (duration: 209.914275ms)","trace[326326687] 'applied index is now lower than readState.Index'  (duration: 4.136µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:09.106829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.885262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-05-20T12:57:09.106896Z","caller":"traceutil/trace.go:171","msg":"trace[1504113938] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1175; }","duration":"144.001652ms","start":"2024-05-20T12:57:08.962878Z","end":"2024-05-20T12:57:09.10688Z","steps":["trace[1504113938] 'agreement among raft nodes before linearized reading'  (duration: 143.810281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:57:09.107098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.589083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-05-20T12:57:09.107231Z","caller":"traceutil/trace.go:171","msg":"trace[2064512433] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-8g977.17d133b933159048; range_end:; response_count:1; response_revision:1175; }","duration":"210.737054ms","start":"2024-05-20T12:57:08.896486Z","end":"2024-05-20T12:57:09.107223Z","steps":["trace[2064512433] 'agreement among raft nodes before linearized reading'  (duration: 210.56075ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751353Z","caller":"traceutil/trace.go:171","msg":"trace[882681731] linearizableReadLoop","detail":"{readStateIndex:1334; appliedIndex:1333; }","duration":"159.848303ms","start":"2024-05-20T12:57:45.591489Z","end":"2024-05-20T12:57:45.751337Z","steps":["trace[882681731] 'read index received'  (duration: 159.544489ms)","trace[882681731] 'applied index is now lower than readState.Index'  (duration: 303.24µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:57:45.751582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.05588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8g977\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T12:57:45.751624Z","caller":"traceutil/trace.go:171","msg":"trace[1190867087] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8g977; range_end:; response_count:1; response_revision:1286; }","duration":"160.147841ms","start":"2024-05-20T12:57:45.591463Z","end":"2024-05-20T12:57:45.751611Z","steps":["trace[1190867087] 'agreement among raft nodes before linearized reading'  (duration: 159.984942ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:57:45.751907Z","caller":"traceutil/trace.go:171","msg":"trace[346556204] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"270.337258ms","start":"2024-05-20T12:57:45.481561Z","end":"2024-05-20T12:57:45.751899Z","steps":["trace[346556204] 'process raft request'  (duration: 269.51066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.70607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.237789Z","time spent":"468.269671ms","remote":"127.0.0.1:52996","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-05-20T12:58:18.706201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.609863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:58:18.70625Z","caller":"traceutil/trace.go:171","msg":"trace[779024922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1361; }","duration":"329.774357ms","start":"2024-05-20T12:58:18.376465Z","end":"2024-05-20T12:58:18.706239Z","steps":["trace[779024922] 'agreement among raft nodes before linearized reading'  (duration: 329.619372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:58:18.706322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:58:18.376449Z","time spent":"329.864586ms","remote":"127.0.0.1:52954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-20T12:58:18.70606Z","caller":"traceutil/trace.go:171","msg":"trace[1211898191] linearizableReadLoop","detail":"{readStateIndex:1416; appliedIndex:1415; }","duration":"329.524623ms","start":"2024-05-20T12:58:18.3765Z","end":"2024-05-20T12:58:18.706024Z","steps":["trace[1211898191] 'read index received'  (duration: 329.28852ms)","trace[1211898191] 'applied index is now lower than readState.Index'  (duration: 235.026µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:58:18.706606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.994414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85719"}
	{"level":"info","ts":"2024-05-20T12:58:18.706632Z","caller":"traceutil/trace.go:171","msg":"trace[1149837529] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1361; }","duration":"145.072721ms","start":"2024-05-20T12:58:18.561551Z","end":"2024-05-20T12:58:18.706624Z","steps":["trace[1149837529] 'agreement among raft nodes before linearized reading'  (duration: 144.887626ms)"],"step_count":1}
	
	
	==> gcp-auth [135a96f190c99f5e260335c022f15f4c6f9bbea70184e9e733be8cff9027ea7c] <==
	2024/05/20 12:57:09 GCP Auth Webhook started!
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:12 Ready to marshal response ...
	2024/05/20 12:58:12 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	2024/05/20 12:58:19 Ready to marshal response ...
	2024/05/20 12:58:19 Ready to write response ...
	
	
	==> kernel <==
	 12:58:31 up 3 min,  0 users,  load average: 1.42, 1.35, 0.59
	Linux addons-840762 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9b2ffe0b08efee67568aaf2738f334bf710467def1071d1590bbe7c26cc06a34] <==
	I0520 12:55:49.850630       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:55:49.877327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 12:55:49.877360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 12:55:50.939380       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.105.127.218"}
	I0520 12:55:51.005762       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.96.26.245"}
	I0520 12:55:51.114723       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0520 12:55:52.567578       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.105.203.105"}
	I0520 12:55:52.606101       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0520 12:55:52.907714       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.174.177"}
	I0520 12:55:54.666239       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.70.119"}
	W0520 12:56:49.135687       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:56:49.135795       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:56:49.135805       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:56:49.142444       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:56:49.142498       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:56:49.142509       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:57:48.933684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:57:48.934987       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0520 12:57:48.934742       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.937295       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	E0520 12:57:48.944679       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.14.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.14.240:443: connect: connection refused
	I0520 12:57:49.061190       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 12:58:27.221268       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [6cca9c1fefcd523bfea8bb7ba178beb2f837923c6c1773dc6490db5c868b1865] <==
	I0520 12:56:58.323603       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 12:56:58.376267       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:56:59.397988       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:56:59.505697       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:57:00.408690       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:57:00.417761       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:57:00.422825       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:57:02.653464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="8.711957ms"
	I0520 12:57:02.654271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="55.356µs"
	I0520 12:57:04.621490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="58.684µs"
	I0520 12:57:09.679824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="24.974119ms"
	I0520 12:57:09.679946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="76.046µs"
	E0520 12:57:12.066797       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:57:12.533210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 12:57:15.178489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="45.467509ms"
	I0520 12:57:15.178612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="69.093µs"
	I0520 12:57:28.019571       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 12:57:28.083418       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 12:57:30.012438       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 12:57:30.044362       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	E0520 12:57:42.071621       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:57:42.541002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 12:57:48.923347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="12.915782ms"
	I0520 12:57:48.923458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="58.106µs"
	I0520 12:58:24.095871       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="6.893µs"
	
	
	==> kube-proxy [a0af7ffce7a1284208f780dff18dcbcefc9abe8a81baa8367f235bb407165a8c] <==
	I0520 12:55:43.546950       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:55:43.566793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	I0520 12:55:43.676877       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:55:43.676950       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:55:43.676967       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:55:43.680164       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:55:43.680354       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:55:43.680369       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:55:43.683205       1 config.go:192] "Starting service config controller"
	I0520 12:55:43.683236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:55:43.683271       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:55:43.683275       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:55:43.683845       1 config.go:319] "Starting node config controller"
	I0520 12:55:43.683852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:55:43.783393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:55:43.783421       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:55:43.784288       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6363b2ba4829a05f15abd8690be759abe089bc3e7fdfbb93dae386b798c7477a] <==
	W0520 12:55:26.558819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:26.558844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:26.558898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:55:26.558919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:55:26.559021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:55:26.559083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:55:27.360805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:55:27.360863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:55:27.410660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:55:27.410724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 12:55:27.421620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 12:55:27.421692       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 12:55:27.595975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:55:27.596024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:55:27.615749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:55:27.615778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 12:55:27.672874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.672999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.707891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:55:27.707934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:55:27.803616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:55:27.803709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:55:27.813500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:55:27.813540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 12:55:30.530739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.044409    1276 memory_manager.go:354] "RemoveStaleState removing state" podUID="88344eab-652a-4d9d-9f7f-171aa2936225" containerName="nvidia-device-plugin-ctr"
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.044440    1276 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca35b86e-6424-40e0-a0d6-cbd41f0ccab0" containerName="registry-proxy"
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.199206    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef6f8a93-1567-44f6-8095-fb964ae1388e\" (UniqueName: \"kubernetes.io/host-path/429dc7e0-07e3-454c-9504-ac4e03b1842d-pvc-ef6f8a93-1567-44f6-8095-fb964ae1388e\") pod \"test-local-path\" (UID: \"429dc7e0-07e3-454c-9504-ac4e03b1842d\") " pod="default/test-local-path"
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.199412    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/429dc7e0-07e3-454c-9504-ac4e03b1842d-gcp-creds\") pod \"test-local-path\" (UID: \"429dc7e0-07e3-454c-9504-ac4e03b1842d\") " pod="default/test-local-path"
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.199535    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbk7\" (UniqueName: \"kubernetes.io/projected/429dc7e0-07e3-454c-9504-ac4e03b1842d-kube-api-access-bvbk7\") pod \"test-local-path\" (UID: \"429dc7e0-07e3-454c-9504-ac4e03b1842d\") " pod="default/test-local-path"
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.807238    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9af51654-cb9f-422e-b702-34c99a24bdc7-gcp-creds\") pod \"9af51654-cb9f-422e-b702-34c99a24bdc7\" (UID: \"9af51654-cb9f-422e-b702-34c99a24bdc7\") "
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.807400    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9d01c29b-16a8-11ef-adf7-7a71fb7dcc5e\") pod \"9af51654-cb9f-422e-b702-34c99a24bdc7\" (UID: \"9af51654-cb9f-422e-b702-34c99a24bdc7\") "
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.807428    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z759l\" (UniqueName: \"kubernetes.io/projected/9af51654-cb9f-422e-b702-34c99a24bdc7-kube-api-access-z759l\") pod \"9af51654-cb9f-422e-b702-34c99a24bdc7\" (UID: \"9af51654-cb9f-422e-b702-34c99a24bdc7\") "
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.807809    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9af51654-cb9f-422e-b702-34c99a24bdc7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9af51654-cb9f-422e-b702-34c99a24bdc7" (UID: "9af51654-cb9f-422e-b702-34c99a24bdc7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.810973    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af51654-cb9f-422e-b702-34c99a24bdc7-kube-api-access-z759l" (OuterVolumeSpecName: "kube-api-access-z759l") pod "9af51654-cb9f-422e-b702-34c99a24bdc7" (UID: "9af51654-cb9f-422e-b702-34c99a24bdc7"). InnerVolumeSpecName "kube-api-access-z759l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.811850    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9d01c29b-16a8-11ef-adf7-7a71fb7dcc5e" (OuterVolumeSpecName: "task-pv-storage") pod "9af51654-cb9f-422e-b702-34c99a24bdc7" (UID: "9af51654-cb9f-422e-b702-34c99a24bdc7"). InnerVolumeSpecName "pvc-f66d7933-7755-4853-9c7c-4c04737dc612". PluginName "kubernetes.io/csi", VolumeGidValue ""
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.908291    1276 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9af51654-cb9f-422e-b702-34c99a24bdc7-gcp-creds\") on node \"addons-840762\" DevicePath \"\""
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.908421    1276 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-f66d7933-7755-4853-9c7c-4c04737dc612\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9d01c29b-16a8-11ef-adf7-7a71fb7dcc5e\") on node \"addons-840762\" "
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.908503    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z759l\" (UniqueName: \"kubernetes.io/projected/9af51654-cb9f-422e-b702-34c99a24bdc7-kube-api-access-z759l\") on node \"addons-840762\" DevicePath \"\""
	May 20 12:58:28 addons-840762 kubelet[1276]: I0520 12:58:28.915944    1276 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-f66d7933-7755-4853-9c7c-4c04737dc612" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9d01c29b-16a8-11ef-adf7-7a71fb7dcc5e") on node "addons-840762"
	May 20 12:58:29 addons-840762 kubelet[1276]: I0520 12:58:29.009316    1276 reconciler_common.go:289] "Volume detached for volume \"pvc-f66d7933-7755-4853-9c7c-4c04737dc612\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9d01c29b-16a8-11ef-adf7-7a71fb7dcc5e\") on node \"addons-840762\" DevicePath \"\""
	May 20 12:58:29 addons-840762 kubelet[1276]: E0520 12:58:29.417274    1276 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:58:29 addons-840762 kubelet[1276]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:58:29 addons-840762 kubelet[1276]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:58:29 addons-840762 kubelet[1276]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:58:29 addons-840762 kubelet[1276]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:58:29 addons-840762 kubelet[1276]: I0520 12:58:29.474550    1276 scope.go:117] "RemoveContainer" containerID="8d39e1ecb70aa349bffad597eb7000aae78aa74cc70deb59b06f1a5628ff82d3"
	May 20 12:58:29 addons-840762 kubelet[1276]: I0520 12:58:29.492966    1276 scope.go:117] "RemoveContainer" containerID="0561e36dd4b07b20d8acf944cb0fd5b3ef99bfbc914266f692a696efca23e0f1"
	May 20 12:58:29 addons-840762 kubelet[1276]: I0520 12:58:29.551690    1276 scope.go:117] "RemoveContainer" containerID="a5c33b19087e35766b61a555cb6613b1ee826492c8f93e642cb23ffd675a60af"
	May 20 12:58:31 addons-840762 kubelet[1276]: I0520 12:58:31.398072    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af51654-cb9f-422e-b702-34c99a24bdc7" path="/var/lib/kubelet/pods/9af51654-cb9f-422e-b702-34c99a24bdc7/volumes"
	
	
	==> storage-provisioner [8e66ec7f2ae77116414c457358da2b30db464a9cef54c6d94abb3330231e0638] <==
	I0520 12:55:49.365029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:55:49.426698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:55:49.426754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:55:49.559886       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:55:49.560949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc1e5491-e0b6-4a74-9796-3c1c2ff6413c", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e became leader
	I0520 12:55:49.567383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	I0520 12:55:49.775880       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-840762_c6e61084-5a81-4fcf-a1aa-29d22ee4d88e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-840762 -n addons-840762
helpers_test.go:261: (dbg) Run:  kubectl --context addons-840762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-lglgg ingress-nginx-admission-patch-xpvg2
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-840762 describe pod test-local-path ingress-nginx-admission-create-lglgg ingress-nginx-admission-patch-xpvg2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-840762 describe pod test-local-path ingress-nginx-admission-create-lglgg ingress-nginx-admission-patch-xpvg2: exit status 1 (88.849376ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-840762/192.168.39.19
	Start Time:       Mon, 20 May 2024 12:58:28 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  cri-o://a2b5562339477379ed3d06bdcee83478d2baf36cd5ce920b4bb32ecc7966c187
	    Image:         busybox:stable
	    Image ID:      65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 20 May 2024 12:58:31 +0000
	      Finished:     Mon, 20 May 2024 12:58:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvbk7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-bvbk7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/test-local-path to addons-840762
	  Normal  Pulling    4s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "busybox:stable" in 2.394s (2.395s including waiting). Image size: 4503713 bytes.
	  Normal  Created    1s    kubelet            Created container busybox
	  Normal  Started    1s    kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lglgg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpvg2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-840762 describe pod test-local-path ingress-nginx-admission-create-lglgg ingress-nginx-admission-patch-xpvg2: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (7.80s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-840762
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-840762: exit status 82 (2m0.480121133s)

                                                
                                                
-- stdout --
	* Stopping node "addons-840762"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-840762" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-840762
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-840762: exit status 11 (21.659867567s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-840762" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-840762
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-840762: exit status 11 (6.14410053s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-840762" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-840762
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-840762: exit status 11 (6.143333505s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-840762" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.43s)

                                                
                                    
x
+
TestErrorSpam/setup (41.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-609784 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-609784 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-609784 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-609784 --driver=kvm2  --container-runtime=crio: (41.249797217s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1"
error_spam_test.go:110: minikube stdout:
* [nospam-609784] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18929
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "nospam-609784" primary control-plane node in "nospam-609784" cluster
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-609784" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
--- FAIL: TestErrorSpam/setup (41.25s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (1064.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --alsologtostderr -v=8
E0520 13:08:42.770002  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:09:23.731001  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:10:45.651678  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:13:01.807735  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:13:29.492805  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:18:01.807469  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:23:01.807874  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:24:24.853609  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
functional_test.go:655: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-694790 --alsologtostderr -v=8: exit status 80 (17m42.622927265s)

                                                
                                                
-- stdout --
	* [functional-694790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-694790" primary control-plane node in "functional-694790" cluster
	* Updating the running kvm2 "functional-694790" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:08:26.734453  616253 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:08:26.734693  616253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:08:26.734702  616253 out.go:304] Setting ErrFile to fd 2...
	I0520 13:08:26.734706  616253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:08:26.734906  616253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:08:26.735440  616253 out.go:298] Setting JSON to false
	I0520 13:08:26.736363  616253 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10247,"bootTime":1716200260,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:08:26.736424  616253 start.go:139] virtualization: kvm guest
	I0520 13:08:26.739826  616253 out.go:177] * [functional-694790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:08:26.742194  616253 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:08:26.744238  616253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:08:26.742257  616253 notify.go:220] Checking for updates...
	I0520 13:08:26.746662  616253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:08:26.749083  616253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:08:26.751463  616253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:08:26.753716  616253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:08:26.756362  616253 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:08:26.756482  616253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:08:26.756919  616253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:08:26.756982  616253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:08:26.772413  616253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
	I0520 13:08:26.772986  616253 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:08:26.773584  616253 main.go:141] libmachine: Using API Version  1
	I0520 13:08:26.773611  616253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:08:26.774051  616253 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:08:26.774291  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.812111  616253 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:08:26.814376  616253 start.go:297] selected driver: kvm2
	I0520 13:08:26.814397  616253 start.go:901] validating driver "kvm2" against &{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:08:26.814513  616253 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:08:26.814855  616253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:08:26.814958  616253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:08:26.830243  616253 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:08:26.831035  616253 cni.go:84] Creating CNI manager for ""
	I0520 13:08:26.831050  616253 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:08:26.831117  616253 start.go:340] cluster config:
	{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:08:26.831235  616253 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:08:26.833917  616253 out.go:177] * Starting "functional-694790" primary control-plane node in "functional-694790" cluster
	I0520 13:08:26.836341  616253 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:08:26.836386  616253 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:08:26.836400  616253 cache.go:56] Caching tarball of preloaded images
	I0520 13:08:26.836504  616253 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:08:26.836520  616253 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:08:26.836629  616253 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/config.json ...
	I0520 13:08:26.836851  616253 start.go:360] acquireMachinesLock for functional-694790: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:08:26.836903  616253 start.go:364] duration metric: took 30.525µs to acquireMachinesLock for "functional-694790"
	I0520 13:08:26.836924  616253 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:08:26.836933  616253 fix.go:54] fixHost starting: 
	I0520 13:08:26.837222  616253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:08:26.837311  616253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:08:26.853563  616253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0520 13:08:26.854176  616253 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:08:26.854715  616253 main.go:141] libmachine: Using API Version  1
	I0520 13:08:26.854742  616253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:08:26.855108  616253 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:08:26.855362  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.855569  616253 main.go:141] libmachine: (functional-694790) Calling .GetState
	I0520 13:08:26.857420  616253 fix.go:112] recreateIfNeeded on functional-694790: state=Running err=<nil>
	W0520 13:08:26.857465  616253 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:08:26.860286  616253 out.go:177] * Updating the running kvm2 "functional-694790" VM ...
	I0520 13:08:26.862456  616253 machine.go:94] provisionDockerMachine start ...
	I0520 13:08:26.862479  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.862698  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:26.865144  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.865615  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:26.865647  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.865790  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:26.865983  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.866151  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.866285  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:26.866437  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:26.866613  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:26.866623  616253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:08:26.973470  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-694790
	
	I0520 13:08:26.973500  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:26.973764  616253 buildroot.go:166] provisioning hostname "functional-694790"
	I0520 13:08:26.973782  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:26.973994  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:26.977222  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.977605  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:26.977647  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.977834  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:26.978062  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.978271  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.978423  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:26.978586  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:26.978749  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:26.978761  616253 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-694790 && echo "functional-694790" | sudo tee /etc/hostname
	I0520 13:08:27.099833  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-694790
	
	I0520 13:08:27.099873  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.102963  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.103308  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.103346  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.103529  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.103755  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.103950  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.104145  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.104302  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:27.104474  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:27.104491  616253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-694790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-694790/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-694790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:08:27.209794  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:08:27.209827  616253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:08:27.209849  616253 buildroot.go:174] setting up certificates
	I0520 13:08:27.209857  616253 provision.go:84] configureAuth start
	I0520 13:08:27.209869  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:27.210191  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:08:27.213131  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.213532  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.213564  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.213740  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.216163  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.216506  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.216540  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.216638  616253 provision.go:143] copyHostCerts
	I0520 13:08:27.216804  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:08:27.216852  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:08:27.216873  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:08:27.216955  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:08:27.217072  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:08:27.217094  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:08:27.217098  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:08:27.217124  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:08:27.217175  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:08:27.217195  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:08:27.217202  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:08:27.217226  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:08:27.217329  616253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.functional-694790 san=[127.0.0.1 192.168.39.165 functional-694790 localhost minikube]
	I0520 13:08:27.347990  616253 provision.go:177] copyRemoteCerts
	I0520 13:08:27.348054  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:08:27.348080  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.351038  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.351400  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.351438  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.351599  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.351744  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.351879  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.352065  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:27.440528  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:08:27.440603  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 13:08:27.469409  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:08:27.469498  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:08:27.492300  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:08:27.492393  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:08:27.516420  616253 provision.go:87] duration metric: took 306.549523ms to configureAuth
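The configureAuth step above regenerates a server certificate whose SAN list covers 127.0.0.1, the VM IP 192.168.39.165, the hostname, localhost and minikube, signed by the CA material copied from .minikube/certs. A minimal Go sketch of that idea follows; this is not minikube's actual provisioner code, the file names, the PKCS#1 CA key format and the three-year lifetime are assumptions, and error handling is omitted for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed inputs: the CA pair copied by copyHostCerts above.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-694790"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries matching the san=[...] list logged by provision.go above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.165")},
		DNSNames:    []string{"functional-694790", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
	_ = os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}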
	I0520 13:08:27.516454  616253 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:08:27.516636  616253 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:08:27.516739  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.519724  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.520079  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.520114  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.520212  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.520556  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.520757  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.520945  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.521178  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:27.521400  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:27.521418  616253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:08:33.122283  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:08:33.122316  616253 machine.go:97] duration metric: took 6.259843677s to provisionDockerMachine
	I0520 13:08:33.122331  616253 start.go:293] postStartSetup for "functional-694790" (driver="kvm2")
	I0520 13:08:33.122343  616253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:08:33.122362  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.122709  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:08:33.122735  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.125553  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.125975  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.126011  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.126167  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.126381  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.126601  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.126757  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.212290  616253 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:08:33.216257  616253 command_runner.go:130] > NAME=Buildroot
	I0520 13:08:33.216287  616253 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 13:08:33.216291  616253 command_runner.go:130] > ID=buildroot
	I0520 13:08:33.216295  616253 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 13:08:33.216301  616253 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 13:08:33.216338  616253 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:08:33.216359  616253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:08:33.216433  616253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:08:33.216559  616253 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:08:33.216572  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:08:33.216635  616253 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts -> hosts in /etc/test/nested/copy/609867
	I0520 13:08:33.216644  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts -> /etc/test/nested/copy/609867/hosts
	I0520 13:08:33.216678  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/609867
	I0520 13:08:33.226292  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:08:33.249802  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts --> /etc/test/nested/copy/609867/hosts (40 bytes)
	I0520 13:08:33.272580  616253 start.go:296] duration metric: took 150.233991ms for postStartSetup
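The postStartSetup block above mirrors everything under the profile's .minikube/files tree into the guest at the same relative path (6098672.pem into /etc/ssl/certs, the nested hosts file into /etc/test/nested/copy/609867). Below is a rough local sketch of that path mapping only, assuming a plain filesystem copy rather than minikube's scp-over-SSH transfer; the destination root /tmp/guest-root is purely illustrative.

package main

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// syncFiles walks srcRoot and copies every regular file to the same relative
// path under dstRoot, creating parent directories as needed.
func syncFiles(srcRoot, dstRoot string) error {
	return filepath.WalkDir(srcRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(srcRoot, path)
		if err != nil {
			return err
		}
		dst := filepath.Join(dstRoot, rel)
		if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// Source root taken from the log above; destination root is hypothetical.
	if err := syncFiles("/home/jenkins/minikube-integration/18929-602525/.minikube/files", "/tmp/guest-root"); err != nil {
		panic(err)
	}
}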
	I0520 13:08:33.272636  616253 fix.go:56] duration metric: took 6.435701648s for fixHost
	I0520 13:08:33.272683  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.275729  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.276119  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.276158  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.276313  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.276554  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.276736  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.276936  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.277228  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:33.277439  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:33.277450  616253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:08:33.381944  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210513.374511326
	
	I0520 13:08:33.381978  616253 fix.go:216] guest clock: 1716210513.374511326
	I0520 13:08:33.381989  616253 fix.go:229] Guest: 2024-05-20 13:08:33.374511326 +0000 UTC Remote: 2024-05-20 13:08:33.272641604 +0000 UTC m=+6.572255559 (delta=101.869722ms)
	I0520 13:08:33.382022  616253 fix.go:200] guest clock delta is within tolerance: 101.869722ms
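fix.go compares the guest's date +%s.%N output against a host-side reference timestamp and decides whether the drift is within tolerance. A small sketch of that comparison using the exact values from the log above; the one-second tolerance is an assumption rather than minikube's real threshold, and float64 parsing is only accurate to roughly a microsecond at this magnitude, which is fine for an illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's "date +%s.%N" output and returns how far
// it is ahead of (positive) or behind (negative) the given reference time.
func guestClockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %v", guestOut, err)
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	return guest.Sub(ref), nil
}

func main() {
	// Values copied from the log: guest 1716210513.374511326, remote reference
	// 2024-05-20 13:08:33.272641604 UTC; the delta comes out near 101.87ms.
	delta, err := guestClockDelta("1716210513.374511326", time.Unix(1716210513, 272641604))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	if delta > tolerance || delta < -tolerance {
		fmt.Printf("guest clock delta %v is outside tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}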
	I0520 13:08:33.382031  616253 start.go:83] releasing machines lock for "functional-694790", held for 6.54511315s
	I0520 13:08:33.382067  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.382358  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:08:33.384955  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.385314  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.385351  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.385466  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386084  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386262  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386329  616253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:08:33.386379  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.386531  616253 ssh_runner.go:195] Run: cat /version.json
	I0520 13:08:33.386562  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.389148  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389509  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.389541  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389716  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.389747  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389921  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.390092  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.390151  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.390177  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.390265  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.390391  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.390555  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.390751  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.390904  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.498976  616253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 13:08:33.499045  616253 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 13:08:33.499190  616253 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:08:33.499298  616253 ssh_runner.go:195] Run: systemctl --version
	I0520 13:08:33.505479  616253 command_runner.go:130] > systemd 252 (252)
	I0520 13:08:33.505527  616253 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 13:08:33.505616  616253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:08:33.902962  616253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 13:08:33.967723  616253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 13:08:33.968221  616253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:08:33.968307  616253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:08:34.025380  616253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:08:34.025424  616253 start.go:494] detecting cgroup driver to use...
	I0520 13:08:34.025507  616253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:08:34.120247  616253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:08:34.176301  616253 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:08:34.176389  616253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:08:34.210162  616253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:08:34.237578  616253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:08:34.477160  616253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:08:34.701226  616253 docker.go:233] disabling docker service ...
	I0520 13:08:34.701358  616253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:08:34.754859  616253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:08:34.777764  616253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:08:34.959596  616253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:08:35.149350  616253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:08:35.163721  616253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:08:35.185396  616253 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 13:08:35.185566  616253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:08:35.185642  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.198694  616253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:08:35.198788  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.210858  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.221722  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.234110  616253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:08:35.245952  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.256480  616253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.267035  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.277016  616253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:08:35.286106  616253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 13:08:35.286469  616253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
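The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.9 and cgroup_manager is forced to cgroupfs before CRI-O is restarted. The following is a hedged Go equivalent of just those two substitutions; the file path and option names come from the log, and error handling is reduced to panics.

package main

import (
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(confPath, []byte(conf), 0644); err != nil {
		panic(err)
	}
}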
	I0520 13:08:35.296318  616253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:08:35.468962  616253 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:10:05.972393  616253 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.503367989s)
	I0520 13:10:05.972450  616253 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:10:05.972520  616253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:10:05.977889  616253 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 13:10:05.977918  616253 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 13:10:05.977926  616253 command_runner.go:130] > Device: 0,22	Inode: 1640        Links: 1
	I0520 13:10:05.977937  616253 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:10:05.977942  616253 command_runner.go:130] > Access: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977951  616253 command_runner.go:130] > Modify: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977958  616253 command_runner.go:130] > Change: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977964  616253 command_runner.go:130] >  Birth: -
	I0520 13:10:05.977997  616253 start.go:562] Will wait 60s for crictl version
	I0520 13:10:05.978066  616253 ssh_runner.go:195] Run: which crictl
	I0520 13:10:05.981829  616253 command_runner.go:130] > /usr/bin/crictl
	I0520 13:10:05.981911  616253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:10:06.019728  616253 command_runner.go:130] > Version:  0.1.0
	I0520 13:10:06.019757  616253 command_runner.go:130] > RuntimeName:  cri-o
	I0520 13:10:06.019763  616253 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 13:10:06.019771  616253 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 13:10:06.020771  616253 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:10:06.020873  616253 ssh_runner.go:195] Run: crio --version
	I0520 13:10:06.047914  616253 command_runner.go:130] > crio version 1.29.1
	I0520 13:10:06.047945  616253 command_runner.go:130] > Version:        1.29.1
	I0520 13:10:06.047951  616253 command_runner.go:130] > GitCommit:      unknown
	I0520 13:10:06.047955  616253 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:10:06.047959  616253 command_runner.go:130] > GitTreeState:   clean
	I0520 13:10:06.047965  616253 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:10:06.047969  616253 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:10:06.047973  616253 command_runner.go:130] > Compiler:       gc
	I0520 13:10:06.047978  616253 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:10:06.047982  616253 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:10:06.047987  616253 command_runner.go:130] > BuildTags:      
	I0520 13:10:06.047991  616253 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:10:06.047995  616253 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:10:06.047999  616253 command_runner.go:130] >   btrfs_noversion
	I0520 13:10:06.048006  616253 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:10:06.048013  616253 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:10:06.048022  616253 command_runner.go:130] >   seccomp
	I0520 13:10:06.048031  616253 command_runner.go:130] > LDFlags:          unknown
	I0520 13:10:06.048035  616253 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:10:06.048039  616253 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:10:06.049273  616253 ssh_runner.go:195] Run: crio --version
	I0520 13:10:06.079787  616253 command_runner.go:130] > crio version 1.29.1
	I0520 13:10:06.079821  616253 command_runner.go:130] > Version:        1.29.1
	I0520 13:10:06.079829  616253 command_runner.go:130] > GitCommit:      unknown
	I0520 13:10:06.079836  616253 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:10:06.079844  616253 command_runner.go:130] > GitTreeState:   clean
	I0520 13:10:06.079852  616253 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:10:06.079857  616253 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:10:06.079861  616253 command_runner.go:130] > Compiler:       gc
	I0520 13:10:06.079866  616253 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:10:06.079869  616253 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:10:06.079874  616253 command_runner.go:130] > BuildTags:      
	I0520 13:10:06.079877  616253 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:10:06.079882  616253 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:10:06.079885  616253 command_runner.go:130] >   btrfs_noversion
	I0520 13:10:06.079890  616253 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:10:06.079894  616253 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:10:06.079897  616253 command_runner.go:130] >   seccomp
	I0520 13:10:06.079900  616253 command_runner.go:130] > LDFlags:          unknown
	I0520 13:10:06.079904  616253 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:10:06.079908  616253 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:10:06.082986  616253 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:10:06.085358  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:10:06.088257  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:10:06.088624  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:10:06.088655  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:10:06.088867  616253 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:10:06.092881  616253 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 13:10:06.093020  616253 kubeadm.go:877] updating cluster {Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:10:06.093188  616253 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:10:06.093270  616253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:10:06.138840  616253 command_runner.go:130] > {
	I0520 13:10:06.138876  616253 command_runner.go:130] >   "images": [
	I0520 13:10:06.138882  616253 command_runner.go:130] >     {
	I0520 13:10:06.138895  616253 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:10:06.138903  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.138912  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:10:06.138918  616253 command_runner.go:130] >       ],
	I0520 13:10:06.138924  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.138935  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:10:06.138945  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:10:06.138950  616253 command_runner.go:130] >       ],
	I0520 13:10:06.138958  616253 command_runner.go:130] >       "size": "65291810",
	I0520 13:10:06.138964  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.138970  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.138988  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.138996  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139001  616253 command_runner.go:130] >     },
	I0520 13:10:06.139006  616253 command_runner.go:130] >     {
	I0520 13:10:06.139019  616253 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:10:06.139025  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139034  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:10:06.139041  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139048  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139068  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:10:06.139081  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:10:06.139086  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139099  616253 command_runner.go:130] >       "size": "31470524",
	I0520 13:10:06.139105  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139112  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139118  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139124  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139129  616253 command_runner.go:130] >     },
	I0520 13:10:06.139134  616253 command_runner.go:130] >     {
	I0520 13:10:06.139143  616253 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:10:06.139152  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139160  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:10:06.139165  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139171  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139182  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:10:06.139193  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:10:06.139197  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139204  616253 command_runner.go:130] >       "size": "61245718",
	I0520 13:10:06.139210  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139216  616253 command_runner.go:130] >       "username": "nonroot",
	I0520 13:10:06.139222  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139229  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139235  616253 command_runner.go:130] >     },
	I0520 13:10:06.139250  616253 command_runner.go:130] >     {
	I0520 13:10:06.139259  616253 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:10:06.139265  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139273  616253 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:10:06.139282  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139288  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139298  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:10:06.139311  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:10:06.139318  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139322  616253 command_runner.go:130] >       "size": "150779692",
	I0520 13:10:06.139325  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139329  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139333  616253 command_runner.go:130] >       },
	I0520 13:10:06.139336  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139343  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139348  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139352  616253 command_runner.go:130] >     },
	I0520 13:10:06.139355  616253 command_runner.go:130] >     {
	I0520 13:10:06.139360  616253 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:10:06.139365  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139370  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:10:06.139375  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139379  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139402  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:10:06.139412  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:10:06.139415  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139419  616253 command_runner.go:130] >       "size": "117601759",
	I0520 13:10:06.139422  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139426  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139429  616253 command_runner.go:130] >       },
	I0520 13:10:06.139434  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139438  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139444  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139447  616253 command_runner.go:130] >     },
	I0520 13:10:06.139451  616253 command_runner.go:130] >     {
	I0520 13:10:06.139457  616253 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:10:06.139478  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139485  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:10:06.139489  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139493  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139500  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:10:06.139509  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:10:06.139512  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139517  616253 command_runner.go:130] >       "size": "112170310",
	I0520 13:10:06.139521  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139525  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139528  616253 command_runner.go:130] >       },
	I0520 13:10:06.139533  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139539  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139543  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139546  616253 command_runner.go:130] >     },
	I0520 13:10:06.139550  616253 command_runner.go:130] >     {
	I0520 13:10:06.139555  616253 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:10:06.139562  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139567  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:10:06.139570  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139574  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139581  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:10:06.139588  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:10:06.139593  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139600  616253 command_runner.go:130] >       "size": "85933465",
	I0520 13:10:06.139604  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139607  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139611  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139615  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139618  616253 command_runner.go:130] >     },
	I0520 13:10:06.139623  616253 command_runner.go:130] >     {
	I0520 13:10:06.139629  616253 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:10:06.139633  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139640  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:10:06.139644  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139648  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139662  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:10:06.139671  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:10:06.139675  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139681  616253 command_runner.go:130] >       "size": "63026504",
	I0520 13:10:06.139685  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139690  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139693  616253 command_runner.go:130] >       },
	I0520 13:10:06.139697  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139701  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139705  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139711  616253 command_runner.go:130] >     },
	I0520 13:10:06.139714  616253 command_runner.go:130] >     {
	I0520 13:10:06.139720  616253 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:10:06.139725  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139729  616253 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:10:06.139733  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139736  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139743  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:10:06.139752  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:10:06.139757  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139764  616253 command_runner.go:130] >       "size": "750414",
	I0520 13:10:06.139767  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139771  616253 command_runner.go:130] >         "value": "65535"
	I0520 13:10:06.139775  616253 command_runner.go:130] >       },
	I0520 13:10:06.139779  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139783  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139787  616253 command_runner.go:130] >       "pinned": true
	I0520 13:10:06.139789  616253 command_runner.go:130] >     }
	I0520 13:10:06.139792  616253 command_runner.go:130] >   ]
	I0520 13:10:06.139795  616253 command_runner.go:130] > }
	I0520 13:10:06.140007  616253 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:10:06.140022  616253 crio.go:433] Images already preloaded, skipping extraction
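crio.go concludes that all images are preloaded by decoding the sudo crictl images --output json payload shown above and checking the repoTags against the expected v1.30.1 image list. Below is a small sketch of just the decoding step; the struct covers only the fields visible in the log, and the tag comparison itself is left out.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields of the crictl JSON output shown in the log.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}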
	I0520 13:10:06.140087  616253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:10:06.172124  616253 command_runner.go:130] > {
	I0520 13:10:06.172159  616253 command_runner.go:130] >   "images": [
	I0520 13:10:06.172165  616253 command_runner.go:130] >     {
	I0520 13:10:06.172181  616253 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:10:06.172189  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172198  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:10:06.172204  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172210  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172224  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:10:06.172236  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:10:06.172241  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172249  616253 command_runner.go:130] >       "size": "65291810",
	I0520 13:10:06.172256  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172263  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172276  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172283  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172291  616253 command_runner.go:130] >     },
	I0520 13:10:06.172296  616253 command_runner.go:130] >     {
	I0520 13:10:06.172305  616253 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:10:06.172314  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172322  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:10:06.172328  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172334  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172345  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:10:06.172358  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:10:06.172363  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172374  616253 command_runner.go:130] >       "size": "31470524",
	I0520 13:10:06.172380  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172385  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172391  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172399  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172404  616253 command_runner.go:130] >     },
	I0520 13:10:06.172409  616253 command_runner.go:130] >     {
	I0520 13:10:06.172418  616253 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:10:06.172425  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172432  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:10:06.172438  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172449  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172462  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:10:06.172473  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:10:06.172484  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172490  616253 command_runner.go:130] >       "size": "61245718",
	I0520 13:10:06.172500  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172507  616253 command_runner.go:130] >       "username": "nonroot",
	I0520 13:10:06.172515  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172521  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172529  616253 command_runner.go:130] >     },
	I0520 13:10:06.172535  616253 command_runner.go:130] >     {
	I0520 13:10:06.172544  616253 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:10:06.172552  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172560  616253 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:10:06.172569  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172575  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172587  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:10:06.172604  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:10:06.172612  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172619  616253 command_runner.go:130] >       "size": "150779692",
	I0520 13:10:06.172628  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172635  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172644  616253 command_runner.go:130] >       },
	I0520 13:10:06.172655  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172664  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172670  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172675  616253 command_runner.go:130] >     },
	I0520 13:10:06.172682  616253 command_runner.go:130] >     {
	I0520 13:10:06.172692  616253 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:10:06.172701  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172710  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:10:06.172720  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172726  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172740  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:10:06.172754  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:10:06.172761  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172769  616253 command_runner.go:130] >       "size": "117601759",
	I0520 13:10:06.172777  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172783  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172791  616253 command_runner.go:130] >       },
	I0520 13:10:06.172797  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172807  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172813  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172821  616253 command_runner.go:130] >     },
	I0520 13:10:06.172826  616253 command_runner.go:130] >     {
	I0520 13:10:06.172847  616253 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:10:06.172855  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172863  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:10:06.172872  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172879  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172895  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:10:06.172909  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:10:06.172915  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172923  616253 command_runner.go:130] >       "size": "112170310",
	I0520 13:10:06.172927  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172933  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172938  616253 command_runner.go:130] >       },
	I0520 13:10:06.172947  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172953  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172960  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172965  616253 command_runner.go:130] >     },
	I0520 13:10:06.172970  616253 command_runner.go:130] >     {
	I0520 13:10:06.172979  616253 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:10:06.172993  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173003  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:10:06.173009  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173018  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173029  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:10:06.173043  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:10:06.173052  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173059  616253 command_runner.go:130] >       "size": "85933465",
	I0520 13:10:06.173068  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.173077  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173087  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173093  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.173101  616253 command_runner.go:130] >     },
	I0520 13:10:06.173106  616253 command_runner.go:130] >     {
	I0520 13:10:06.173122  616253 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:10:06.173132  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173139  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:10:06.173148  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173153  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173217  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:10:06.173235  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:10:06.173264  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173272  616253 command_runner.go:130] >       "size": "63026504",
	I0520 13:10:06.173278  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.173283  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.173288  616253 command_runner.go:130] >       },
	I0520 13:10:06.173294  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173300  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173305  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.173311  616253 command_runner.go:130] >     },
	I0520 13:10:06.173316  616253 command_runner.go:130] >     {
	I0520 13:10:06.173325  616253 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:10:06.173331  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173338  616253 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:10:06.173344  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173351  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173362  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:10:06.173373  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:10:06.173378  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173385  616253 command_runner.go:130] >       "size": "750414",
	I0520 13:10:06.173390  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.173396  616253 command_runner.go:130] >         "value": "65535"
	I0520 13:10:06.173403  616253 command_runner.go:130] >       },
	I0520 13:10:06.173413  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173419  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173428  616253 command_runner.go:130] >       "pinned": true
	I0520 13:10:06.173436  616253 command_runner.go:130] >     }
	I0520 13:10:06.173442  616253 command_runner.go:130] >   ]
	I0520 13:10:06.173450  616253 command_runner.go:130] > }
	I0520 13:10:06.173632  616253 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:10:06.173655  616253 cache_images.go:84] Images are preloaded, skipping loading
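For reference, the JSON dumped above is the CRI image list that the preload check walks (id, repoTags, repoDigests, size, pinned per entry). The following is a minimal standalone sketch, not minikube's actual code, that decodes output of this shape; it assumes the list sits under a top-level "images" field, as with `crictl images -o json`, and the struct/field names here are illustrative only.

// Hedged sketch: decode a CRI image list of the shape shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// criImage mirrors the per-image fields visible in the log; unknown fields
// (uid, username, spec, ...) are simply ignored by the decoder.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // sizes are quoted strings in the dump
	Pinned      bool     `json:"pinned"`
}

// imageList assumes a top-level "images" array (as crictl emits); this key is
// not visible in the excerpt above and is an assumption.
type imageList struct {
	Images []criImage `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		fmt.Printf("%-50v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}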
	I0520 13:10:06.173665  616253 kubeadm.go:928] updating node { 192.168.39.165 8441 v1.30.1 crio true true} ...
	I0520 13:10:06.173780  616253 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-694790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
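The kubelet [Service] drop-in above is generated from the node parameters logged by kubeadm.go:928 (version v1.30.1, hostname functional-694790, node IP 192.168.39.165). As a rough illustration only, the sketch below shows how such an ExecStart flag line could be assembled from those parameters; nodeOpts and kubeletExecStart are hypothetical names, not minikube's actual types or functions.

// Hedged sketch: build a kubelet ExecStart line like the one in the log.
package main

import "fmt"

type nodeOpts struct {
	KubernetesVersion string // e.g. "v1.30.1"
	Hostname          string // e.g. "functional-694790"
	NodeIP            string // e.g. "192.168.39.165"
}

// kubeletExecStart reproduces the flag order seen in the generated unit above.
func kubeletExecStart(n nodeOpts) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s",
		n.KubernetesVersion, n.Hostname, n.NodeIP)
}

func main() {
	fmt.Println(kubeletExecStart(nodeOpts{
		KubernetesVersion: "v1.30.1",
		Hostname:          "functional-694790",
		NodeIP:            "192.168.39.165",
	}))
}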
	I0520 13:10:06.173845  616253 ssh_runner.go:195] Run: crio config
	I0520 13:10:06.213463  616253 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 13:10:06.213495  616253 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 13:10:06.213502  616253 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 13:10:06.213507  616253 command_runner.go:130] > #
	I0520 13:10:06.213519  616253 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 13:10:06.213529  616253 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 13:10:06.213539  616253 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 13:10:06.213547  616253 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 13:10:06.213551  616253 command_runner.go:130] > # reload'.
	I0520 13:10:06.213556  616253 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 13:10:06.213562  616253 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 13:10:06.213568  616253 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 13:10:06.213578  616253 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 13:10:06.213583  616253 command_runner.go:130] > [crio]
	I0520 13:10:06.213591  616253 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 13:10:06.213599  616253 command_runner.go:130] > # containers images, in this directory.
	I0520 13:10:06.213608  616253 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 13:10:06.213631  616253 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 13:10:06.213644  616253 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 13:10:06.213651  616253 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 13:10:06.213656  616253 command_runner.go:130] > # imagestore = ""
	I0520 13:10:06.213661  616253 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 13:10:06.213667  616253 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 13:10:06.213676  616253 command_runner.go:130] > storage_driver = "overlay"
	I0520 13:10:06.213685  616253 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 13:10:06.213697  616253 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 13:10:06.213707  616253 command_runner.go:130] > storage_option = [
	I0520 13:10:06.213823  616253 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 13:10:06.213845  616253 command_runner.go:130] > ]
	I0520 13:10:06.213855  616253 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 13:10:06.213865  616253 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 13:10:06.213872  616253 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 13:10:06.213885  616253 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 13:10:06.213896  616253 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 13:10:06.213903  616253 command_runner.go:130] > # always happen on a node reboot
	I0520 13:10:06.213912  616253 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 13:10:06.213930  616253 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 13:10:06.213944  616253 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 13:10:06.213952  616253 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 13:10:06.213961  616253 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 13:10:06.213974  616253 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 13:10:06.213991  616253 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 13:10:06.213999  616253 command_runner.go:130] > # internal_wipe = true
	I0520 13:10:06.214012  616253 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 13:10:06.214022  616253 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 13:10:06.214029  616253 command_runner.go:130] > # internal_repair = false
	I0520 13:10:06.214037  616253 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 13:10:06.214046  616253 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 13:10:06.214061  616253 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 13:10:06.214074  616253 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 13:10:06.214086  616253 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 13:10:06.214092  616253 command_runner.go:130] > [crio.api]
	I0520 13:10:06.214103  616253 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 13:10:06.214113  616253 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 13:10:06.214123  616253 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 13:10:06.214133  616253 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 13:10:06.214144  616253 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 13:10:06.214156  616253 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 13:10:06.214163  616253 command_runner.go:130] > # stream_port = "0"
	I0520 13:10:06.214172  616253 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 13:10:06.214182  616253 command_runner.go:130] > # stream_enable_tls = false
	I0520 13:10:06.214195  616253 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 13:10:06.214210  616253 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 13:10:06.214221  616253 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 13:10:06.214233  616253 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 13:10:06.214239  616253 command_runner.go:130] > # minutes.
	I0520 13:10:06.214248  616253 command_runner.go:130] > # stream_tls_cert = ""
	I0520 13:10:06.214262  616253 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 13:10:06.214272  616253 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 13:10:06.214282  616253 command_runner.go:130] > # stream_tls_key = ""
	I0520 13:10:06.214292  616253 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 13:10:06.214305  616253 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 13:10:06.214326  616253 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 13:10:06.214336  616253 command_runner.go:130] > # stream_tls_ca = ""
	I0520 13:10:06.214347  616253 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:10:06.214358  616253 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 13:10:06.214369  616253 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:10:06.214381  616253 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 13:10:06.214390  616253 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 13:10:06.214403  616253 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 13:10:06.214410  616253 command_runner.go:130] > [crio.runtime]
	I0520 13:10:06.214419  616253 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 13:10:06.214431  616253 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 13:10:06.214441  616253 command_runner.go:130] > # "nofile=1024:2048"
	I0520 13:10:06.214453  616253 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 13:10:06.214464  616253 command_runner.go:130] > # default_ulimits = [
	I0520 13:10:06.214469  616253 command_runner.go:130] > # ]
	I0520 13:10:06.214481  616253 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 13:10:06.214493  616253 command_runner.go:130] > # no_pivot = false
	I0520 13:10:06.214504  616253 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 13:10:06.214518  616253 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 13:10:06.214528  616253 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 13:10:06.214538  616253 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 13:10:06.214549  616253 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 13:10:06.214565  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:10:06.214577  616253 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 13:10:06.214588  616253 command_runner.go:130] > # Cgroup setting for conmon
	I0520 13:10:06.214602  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 13:10:06.214612  616253 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 13:10:06.214623  616253 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 13:10:06.214634  616253 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 13:10:06.214648  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:10:06.214658  616253 command_runner.go:130] > conmon_env = [
	I0520 13:10:06.214668  616253 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:10:06.214676  616253 command_runner.go:130] > ]
	I0520 13:10:06.214685  616253 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 13:10:06.214695  616253 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 13:10:06.214707  616253 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 13:10:06.214716  616253 command_runner.go:130] > # default_env = [
	I0520 13:10:06.214721  616253 command_runner.go:130] > # ]
	I0520 13:10:06.214732  616253 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 13:10:06.214743  616253 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 13:10:06.214752  616253 command_runner.go:130] > # selinux = false
	I0520 13:10:06.214762  616253 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 13:10:06.214775  616253 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 13:10:06.214787  616253 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 13:10:06.214796  616253 command_runner.go:130] > # seccomp_profile = ""
	I0520 13:10:06.214804  616253 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 13:10:06.214816  616253 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 13:10:06.214830  616253 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 13:10:06.214841  616253 command_runner.go:130] > # which might increase security.
	I0520 13:10:06.214849  616253 command_runner.go:130] > # This option is currently deprecated,
	I0520 13:10:06.214862  616253 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 13:10:06.214874  616253 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 13:10:06.214888  616253 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 13:10:06.214899  616253 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 13:10:06.214912  616253 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 13:10:06.214925  616253 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 13:10:06.214936  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.214947  616253 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 13:10:06.214956  616253 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 13:10:06.214967  616253 command_runner.go:130] > # the cgroup blockio controller.
	I0520 13:10:06.214973  616253 command_runner.go:130] > # blockio_config_file = ""
	I0520 13:10:06.214990  616253 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 13:10:06.214999  616253 command_runner.go:130] > # blockio parameters.
	I0520 13:10:06.215006  616253 command_runner.go:130] > # blockio_reload = false
	I0520 13:10:06.215018  616253 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 13:10:06.215031  616253 command_runner.go:130] > # irqbalance daemon.
	I0520 13:10:06.215043  616253 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 13:10:06.215056  616253 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 13:10:06.215070  616253 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 13:10:06.215081  616253 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 13:10:06.215093  616253 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 13:10:06.215105  616253 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 13:10:06.215117  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.215128  616253 command_runner.go:130] > # rdt_config_file = ""
	I0520 13:10:06.215136  616253 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 13:10:06.215149  616253 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 13:10:06.215176  616253 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 13:10:06.215186  616253 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 13:10:06.215201  616253 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 13:10:06.215214  616253 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 13:10:06.215224  616253 command_runner.go:130] > # will be added.
	I0520 13:10:06.215232  616253 command_runner.go:130] > # default_capabilities = [
	I0520 13:10:06.215238  616253 command_runner.go:130] > # 	"CHOWN",
	I0520 13:10:06.215243  616253 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 13:10:06.215252  616253 command_runner.go:130] > # 	"FSETID",
	I0520 13:10:06.215258  616253 command_runner.go:130] > # 	"FOWNER",
	I0520 13:10:06.215267  616253 command_runner.go:130] > # 	"SETGID",
	I0520 13:10:06.215273  616253 command_runner.go:130] > # 	"SETUID",
	I0520 13:10:06.215282  616253 command_runner.go:130] > # 	"SETPCAP",
	I0520 13:10:06.215291  616253 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 13:10:06.215300  616253 command_runner.go:130] > # 	"KILL",
	I0520 13:10:06.215308  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215320  616253 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 13:10:06.215335  616253 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 13:10:06.215344  616253 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 13:10:06.215354  616253 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 13:10:06.215370  616253 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:10:06.215383  616253 command_runner.go:130] > default_sysctls = [
	I0520 13:10:06.215391  616253 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 13:10:06.215399  616253 command_runner.go:130] > ]
	I0520 13:10:06.215408  616253 command_runner.go:130] > # List of devices on the host that a
	I0520 13:10:06.215422  616253 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 13:10:06.215431  616253 command_runner.go:130] > # allowed_devices = [
	I0520 13:10:06.215438  616253 command_runner.go:130] > # 	"/dev/fuse",
	I0520 13:10:06.215447  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215456  616253 command_runner.go:130] > # List of additional devices. specified as
	I0520 13:10:06.215469  616253 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 13:10:06.215482  616253 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 13:10:06.215494  616253 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:10:06.215505  616253 command_runner.go:130] > # additional_devices = [
	I0520 13:10:06.215511  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215520  616253 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 13:10:06.215530  616253 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 13:10:06.215536  616253 command_runner.go:130] > # 	"/etc/cdi",
	I0520 13:10:06.215541  616253 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 13:10:06.215549  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215559  616253 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 13:10:06.215578  616253 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 13:10:06.215588  616253 command_runner.go:130] > # Defaults to false.
	I0520 13:10:06.215597  616253 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 13:10:06.215610  616253 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 13:10:06.215621  616253 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 13:10:06.215630  616253 command_runner.go:130] > # hooks_dir = [
	I0520 13:10:06.215638  616253 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 13:10:06.215646  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215654  616253 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 13:10:06.215668  616253 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 13:10:06.215681  616253 command_runner.go:130] > # its default mounts from the following two files:
	I0520 13:10:06.215687  616253 command_runner.go:130] > #
	I0520 13:10:06.215697  616253 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 13:10:06.215710  616253 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 13:10:06.215718  616253 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 13:10:06.215726  616253 command_runner.go:130] > #
	I0520 13:10:06.215738  616253 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 13:10:06.215751  616253 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 13:10:06.215766  616253 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 13:10:06.215777  616253 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 13:10:06.215786  616253 command_runner.go:130] > #
	I0520 13:10:06.215795  616253 command_runner.go:130] > # default_mounts_file = ""
	I0520 13:10:06.215806  616253 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 13:10:06.215817  616253 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 13:10:06.215826  616253 command_runner.go:130] > pids_limit = 1024
	I0520 13:10:06.215837  616253 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0520 13:10:06.215851  616253 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 13:10:06.215864  616253 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 13:10:06.215881  616253 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 13:10:06.215891  616253 command_runner.go:130] > # log_size_max = -1
	I0520 13:10:06.215902  616253 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 13:10:06.215912  616253 command_runner.go:130] > # log_to_journald = false
	I0520 13:10:06.215922  616253 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 13:10:06.215934  616253 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 13:10:06.215946  616253 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 13:10:06.215958  616253 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 13:10:06.215969  616253 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 13:10:06.215980  616253 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 13:10:06.215990  616253 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 13:10:06.215999  616253 command_runner.go:130] > # read_only = false
	I0520 13:10:06.216008  616253 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 13:10:06.216020  616253 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 13:10:06.216028  616253 command_runner.go:130] > # live configuration reload.
	I0520 13:10:06.216038  616253 command_runner.go:130] > # log_level = "info"
	I0520 13:10:06.216047  616253 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 13:10:06.216059  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.216068  616253 command_runner.go:130] > # log_filter = ""
	I0520 13:10:06.216077  616253 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 13:10:06.216091  616253 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 13:10:06.216102  616253 command_runner.go:130] > # separated by comma.
	I0520 13:10:06.216115  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216122  616253 command_runner.go:130] > # uid_mappings = ""
	I0520 13:10:06.216134  616253 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 13:10:06.216147  616253 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 13:10:06.216157  616253 command_runner.go:130] > # separated by comma.
	I0520 13:10:06.216170  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216180  616253 command_runner.go:130] > # gid_mappings = ""
	I0520 13:10:06.216191  616253 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 13:10:06.216211  616253 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:10:06.216225  616253 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:10:06.216241  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216249  616253 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 13:10:06.216260  616253 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 13:10:06.216273  616253 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:10:06.216287  616253 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:10:06.216303  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216313  616253 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 13:10:06.216323  616253 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 13:10:06.216336  616253 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 13:10:06.216348  616253 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 13:10:06.216358  616253 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 13:10:06.216366  616253 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 13:10:06.216378  616253 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 13:10:06.216388  616253 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 13:10:06.216395  616253 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 13:10:06.216405  616253 command_runner.go:130] > drop_infra_ctr = false
	I0520 13:10:06.216415  616253 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 13:10:06.216427  616253 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 13:10:06.216442  616253 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 13:10:06.216452  616253 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 13:10:06.216463  616253 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 13:10:06.216476  616253 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 13:10:06.216486  616253 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 13:10:06.216499  616253 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 13:10:06.216506  616253 command_runner.go:130] > # shared_cpuset = ""
	I0520 13:10:06.216519  616253 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 13:10:06.216531  616253 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 13:10:06.216542  616253 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 13:10:06.216555  616253 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 13:10:06.216566  616253 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 13:10:06.216575  616253 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 13:10:06.216588  616253 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 13:10:06.216600  616253 command_runner.go:130] > # enable_criu_support = false
	I0520 13:10:06.216610  616253 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 13:10:06.216623  616253 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 13:10:06.216634  616253 command_runner.go:130] > # enable_pod_events = false
	I0520 13:10:06.216648  616253 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:10:06.216661  616253 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:10:06.216672  616253 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 13:10:06.216680  616253 command_runner.go:130] > # default_runtime = "runc"
	I0520 13:10:06.216691  616253 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 13:10:06.216703  616253 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 13:10:06.216720  616253 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 13:10:06.216732  616253 command_runner.go:130] > # creation as a file is not desired either.
	I0520 13:10:06.216746  616253 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 13:10:06.216756  616253 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 13:10:06.216766  616253 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 13:10:06.216772  616253 command_runner.go:130] > # ]
	I0520 13:10:06.216785  616253 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 13:10:06.216802  616253 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 13:10:06.216815  616253 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 13:10:06.216826  616253 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 13:10:06.216833  616253 command_runner.go:130] > #
	I0520 13:10:06.216841  616253 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 13:10:06.216851  616253 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 13:10:06.216879  616253 command_runner.go:130] > # runtime_type = "oci"
	I0520 13:10:06.216891  616253 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 13:10:06.216898  616253 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 13:10:06.216905  616253 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 13:10:06.216915  616253 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 13:10:06.216921  616253 command_runner.go:130] > # monitor_env = []
	I0520 13:10:06.216932  616253 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 13:10:06.216940  616253 command_runner.go:130] > # allowed_annotations = []
	I0520 13:10:06.216952  616253 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 13:10:06.216962  616253 command_runner.go:130] > # Where:
	I0520 13:10:06.216970  616253 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 13:10:06.216979  616253 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 13:10:06.216992  616253 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 13:10:06.217004  616253 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 13:10:06.217013  616253 command_runner.go:130] > #   in $PATH.
	I0520 13:10:06.217022  616253 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 13:10:06.217033  616253 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 13:10:06.217045  616253 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 13:10:06.217054  616253 command_runner.go:130] > #   state.
	I0520 13:10:06.217064  616253 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 13:10:06.217077  616253 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 13:10:06.217089  616253 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 13:10:06.217102  616253 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 13:10:06.217115  616253 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 13:10:06.217128  616253 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 13:10:06.217140  616253 command_runner.go:130] > #   The currently recognized values are:
	I0520 13:10:06.217153  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 13:10:06.217168  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 13:10:06.217181  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 13:10:06.217191  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 13:10:06.217211  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 13:10:06.217225  616253 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 13:10:06.217239  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 13:10:06.217271  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 13:10:06.217285  616253 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 13:10:06.217296  616253 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 13:10:06.217307  616253 command_runner.go:130] > #   deprecated option "conmon".
	I0520 13:10:06.217321  616253 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 13:10:06.217332  616253 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 13:10:06.217346  616253 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 13:10:06.217358  616253 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 13:10:06.217371  616253 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0520 13:10:06.217381  616253 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 13:10:06.217393  616253 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 13:10:06.217404  616253 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 13:10:06.217413  616253 command_runner.go:130] > #
	I0520 13:10:06.217423  616253 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 13:10:06.217427  616253 command_runner.go:130] > #
	I0520 13:10:06.217439  616253 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 13:10:06.217453  616253 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 13:10:06.217461  616253 command_runner.go:130] > #
	I0520 13:10:06.217472  616253 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 13:10:06.217485  616253 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 13:10:06.217493  616253 command_runner.go:130] > #
	I0520 13:10:06.217501  616253 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 13:10:06.217509  616253 command_runner.go:130] > # feature.
	I0520 13:10:06.217513  616253 command_runner.go:130] > #
	I0520 13:10:06.217525  616253 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 13:10:06.217539  616253 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 13:10:06.217551  616253 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 13:10:06.217564  616253 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 13:10:06.217576  616253 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 13:10:06.217585  616253 command_runner.go:130] > #
	I0520 13:10:06.217598  616253 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 13:10:06.217610  616253 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 13:10:06.217618  616253 command_runner.go:130] > #
	I0520 13:10:06.217629  616253 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 13:10:06.217643  616253 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 13:10:06.217651  616253 command_runner.go:130] > #
	I0520 13:10:06.217661  616253 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 13:10:06.217673  616253 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 13:10:06.217682  616253 command_runner.go:130] > # limitation.
	I0520 13:10:06.217689  616253 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 13:10:06.217699  616253 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 13:10:06.217706  616253 command_runner.go:130] > runtime_type = "oci"
	I0520 13:10:06.217715  616253 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 13:10:06.217721  616253 command_runner.go:130] > runtime_config_path = ""
	I0520 13:10:06.217730  616253 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 13:10:06.217736  616253 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 13:10:06.217746  616253 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 13:10:06.217754  616253 command_runner.go:130] > monitor_env = [
	I0520 13:10:06.217767  616253 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:10:06.217774  616253 command_runner.go:130] > ]
	I0520 13:10:06.217781  616253 command_runner.go:130] > privileged_without_host_devices = false
	I0520 13:10:06.217795  616253 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 13:10:06.217805  616253 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 13:10:06.217817  616253 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 13:10:06.217831  616253 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 13:10:06.217845  616253 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 13:10:06.217858  616253 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 13:10:06.217875  616253 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 13:10:06.217889  616253 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 13:10:06.217900  616253 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 13:10:06.217913  616253 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 13:10:06.217922  616253 command_runner.go:130] > # Example:
	I0520 13:10:06.217933  616253 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 13:10:06.217944  616253 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 13:10:06.217954  616253 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 13:10:06.217963  616253 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 13:10:06.217971  616253 command_runner.go:130] > # cpuset = 0
	I0520 13:10:06.217981  616253 command_runner.go:130] > # cpushares = "0-1"
	I0520 13:10:06.217990  616253 command_runner.go:130] > # Where:
	I0520 13:10:06.218000  616253 command_runner.go:130] > # The workload name is workload-type.
	I0520 13:10:06.218012  616253 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 13:10:06.218023  616253 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 13:10:06.218031  616253 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 13:10:06.218048  616253 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 13:10:06.218061  616253 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0520 13:10:06.218071  616253 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 13:10:06.218081  616253 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 13:10:06.218090  616253 command_runner.go:130] > # Default value is set to true
	I0520 13:10:06.218100  616253 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 13:10:06.218111  616253 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 13:10:06.218121  616253 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 13:10:06.218132  616253 command_runner.go:130] > # Default value is set to 'false'
	I0520 13:10:06.218140  616253 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 13:10:06.218153  616253 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 13:10:06.218163  616253 command_runner.go:130] > #
	I0520 13:10:06.218174  616253 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 13:10:06.218187  616253 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 13:10:06.218203  616253 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 13:10:06.218216  616253 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 13:10:06.218228  616253 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 13:10:06.218237  616253 command_runner.go:130] > [crio.image]
	I0520 13:10:06.218247  616253 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 13:10:06.218257  616253 command_runner.go:130] > # default_transport = "docker://"
	I0520 13:10:06.218268  616253 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 13:10:06.218280  616253 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:10:06.218289  616253 command_runner.go:130] > # global_auth_file = ""
	I0520 13:10:06.218299  616253 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 13:10:06.218311  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.218322  616253 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 13:10:06.218336  616253 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 13:10:06.218348  616253 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:10:06.218358  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.218367  616253 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 13:10:06.218375  616253 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 13:10:06.218387  616253 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 13:10:06.218400  616253 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 13:10:06.218410  616253 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 13:10:06.218419  616253 command_runner.go:130] > # pause_command = "/pause"
	I0520 13:10:06.218430  616253 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 13:10:06.218440  616253 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 13:10:06.218452  616253 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 13:10:06.218464  616253 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 13:10:06.218476  616253 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 13:10:06.218486  616253 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 13:10:06.218496  616253 command_runner.go:130] > # pinned_images = [
	I0520 13:10:06.218501  616253 command_runner.go:130] > # ]
	I0520 13:10:06.218514  616253 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 13:10:06.218526  616253 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 13:10:06.218539  616253 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 13:10:06.218551  616253 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 13:10:06.218565  616253 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 13:10:06.218574  616253 command_runner.go:130] > # signature_policy = ""
	I0520 13:10:06.218584  616253 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 13:10:06.218596  616253 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 13:10:06.218612  616253 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 13:10:06.218619  616253 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 13:10:06.218629  616253 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 13:10:06.218635  616253 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 13:10:06.218643  616253 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 13:10:06.218652  616253 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 13:10:06.218657  616253 command_runner.go:130] > # changing them here.
	I0520 13:10:06.218663  616253 command_runner.go:130] > # insecure_registries = [
	I0520 13:10:06.218669  616253 command_runner.go:130] > # ]
	I0520 13:10:06.218677  616253 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 13:10:06.218685  616253 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 13:10:06.218691  616253 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 13:10:06.218698  616253 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 13:10:06.218705  616253 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 13:10:06.218713  616253 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0520 13:10:06.218719  616253 command_runner.go:130] > # CNI plugins.
	I0520 13:10:06.218725  616253 command_runner.go:130] > [crio.network]
	I0520 13:10:06.218734  616253 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 13:10:06.218743  616253 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 13:10:06.218750  616253 command_runner.go:130] > # cni_default_network = ""
	I0520 13:10:06.218757  616253 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 13:10:06.218763  616253 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 13:10:06.218771  616253 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 13:10:06.218776  616253 command_runner.go:130] > # plugin_dirs = [
	I0520 13:10:06.218782  616253 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 13:10:06.218787  616253 command_runner.go:130] > # ]
	I0520 13:10:06.218795  616253 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 13:10:06.218799  616253 command_runner.go:130] > [crio.metrics]
	I0520 13:10:06.218806  616253 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 13:10:06.218811  616253 command_runner.go:130] > enable_metrics = true
	I0520 13:10:06.218818  616253 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 13:10:06.218830  616253 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 13:10:06.218846  616253 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0520 13:10:06.218860  616253 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 13:10:06.218870  616253 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 13:10:06.218879  616253 command_runner.go:130] > # metrics_collectors = [
	I0520 13:10:06.218885  616253 command_runner.go:130] > # 	"operations",
	I0520 13:10:06.218896  616253 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 13:10:06.218904  616253 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 13:10:06.218915  616253 command_runner.go:130] > # 	"operations_errors",
	I0520 13:10:06.218926  616253 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 13:10:06.218936  616253 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 13:10:06.218944  616253 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 13:10:06.218953  616253 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 13:10:06.218960  616253 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 13:10:06.218970  616253 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 13:10:06.218978  616253 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 13:10:06.218988  616253 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 13:10:06.218997  616253 command_runner.go:130] > # 	"containers_oom_total",
	I0520 13:10:06.219020  616253 command_runner.go:130] > # 	"containers_oom",
	I0520 13:10:06.219027  616253 command_runner.go:130] > # 	"processes_defunct",
	I0520 13:10:06.219035  616253 command_runner.go:130] > # 	"operations_total",
	I0520 13:10:06.219044  616253 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 13:10:06.219051  616253 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 13:10:06.219061  616253 command_runner.go:130] > # 	"operations_errors_total",
	I0520 13:10:06.219067  616253 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 13:10:06.219076  616253 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 13:10:06.219082  616253 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 13:10:06.219091  616253 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 13:10:06.219098  616253 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 13:10:06.219104  616253 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 13:10:06.219114  616253 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 13:10:06.219121  616253 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 13:10:06.219128  616253 command_runner.go:130] > # ]
	I0520 13:10:06.219139  616253 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 13:10:06.219148  616253 command_runner.go:130] > # metrics_port = 9090
	I0520 13:10:06.219155  616253 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 13:10:06.219164  616253 command_runner.go:130] > # metrics_socket = ""
	I0520 13:10:06.219174  616253 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 13:10:06.219187  616253 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 13:10:06.219203  616253 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 13:10:06.219216  616253 command_runner.go:130] > # certificate on any modification event.
	I0520 13:10:06.219226  616253 command_runner.go:130] > # metrics_cert = ""
	I0520 13:10:06.219233  616253 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 13:10:06.219244  616253 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 13:10:06.219249  616253 command_runner.go:130] > # metrics_key = ""
	I0520 13:10:06.219261  616253 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 13:10:06.219271  616253 command_runner.go:130] > [crio.tracing]
	I0520 13:10:06.219279  616253 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 13:10:06.219288  616253 command_runner.go:130] > # enable_tracing = false
	I0520 13:10:06.219296  616253 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0520 13:10:06.219305  616253 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 13:10:06.219318  616253 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 13:10:06.219328  616253 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 13:10:06.219335  616253 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 13:10:06.219344  616253 command_runner.go:130] > [crio.nri]
	I0520 13:10:06.219351  616253 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 13:10:06.219361  616253 command_runner.go:130] > # enable_nri = false
	I0520 13:10:06.219368  616253 command_runner.go:130] > # NRI socket to listen on.
	I0520 13:10:06.219378  616253 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 13:10:06.219389  616253 command_runner.go:130] > # NRI plugin directory to use.
	I0520 13:10:06.219397  616253 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 13:10:06.219407  616253 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 13:10:06.219416  616253 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 13:10:06.219426  616253 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 13:10:06.219435  616253 command_runner.go:130] > # nri_disable_connections = false
	I0520 13:10:06.219442  616253 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 13:10:06.219453  616253 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 13:10:06.219461  616253 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 13:10:06.219471  616253 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 13:10:06.219483  616253 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 13:10:06.219492  616253 command_runner.go:130] > [crio.stats]
	I0520 13:10:06.219505  616253 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 13:10:06.219516  616253 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 13:10:06.219529  616253 command_runner.go:130] > # stats_collection_period = 0
	I0520 13:10:06.219560  616253 command_runner.go:130] ! time="2024-05-20 13:10:06.195664658Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 13:10:06.219573  616253 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 13:10:06.219696  616253 cni.go:84] Creating CNI manager for ""
	I0520 13:10:06.219707  616253 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:10:06.219726  616253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:10:06.219753  616253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-694790 NodeName:functional-694790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:10:06.219910  616253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-694790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:10:06.219975  616253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:10:06.229517  616253 command_runner.go:130] > kubeadm
	I0520 13:10:06.229538  616253 command_runner.go:130] > kubectl
	I0520 13:10:06.229542  616253 command_runner.go:130] > kubelet
	I0520 13:10:06.229767  616253 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:10:06.229851  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:10:06.239554  616253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 13:10:06.257771  616253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:10:06.276416  616253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 13:10:06.294406  616253 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0520 13:10:06.298384  616253 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I0520 13:10:06.298466  616253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:10:06.444516  616253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:10:06.459825  616253 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790 for IP: 192.168.39.165
	I0520 13:10:06.460002  616253 certs.go:194] generating shared ca certs ...
	I0520 13:10:06.460062  616253 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:10:06.460266  616253 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:10:06.460318  616253 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:10:06.460335  616253 certs.go:256] generating profile certs ...
	I0520 13:10:06.460469  616253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.key
	I0520 13:10:06.460554  616253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key.d272fba6
	I0520 13:10:06.460608  616253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key
	I0520 13:10:06.460623  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:10:06.460641  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:10:06.460661  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:10:06.460679  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:10:06.460698  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:10:06.460731  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:10:06.460750  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:10:06.460767  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:10:06.460838  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:10:06.460882  616253 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:10:06.460897  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:10:06.460932  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:10:06.460963  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:10:06.460996  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:10:06.461055  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:10:06.461093  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.461113  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.461135  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.461744  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:10:06.485749  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:10:06.508423  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:10:06.531309  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:10:06.554904  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:10:06.577419  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:10:06.599494  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:10:06.622370  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:10:06.645543  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:10:06.668978  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:10:06.691916  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:10:06.713971  616253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:10:06.729459  616253 ssh_runner.go:195] Run: openssl version
	I0520 13:10:06.735000  616253 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 13:10:06.735381  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:10:06.753128  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757508  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757733  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757807  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.762942  616253 command_runner.go:130] > 3ec20f2e
	I0520 13:10:06.763053  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:10:06.771998  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:10:06.781768  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785813  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785892  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785945  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.790992  616253 command_runner.go:130] > b5213941
	I0520 13:10:06.791066  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:10:06.799691  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:10:06.809997  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814233  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814266  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814302  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.819637  616253 command_runner.go:130] > 51391683
	I0520 13:10:06.819717  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:10:06.828643  616253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:10:06.832994  616253 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:10:06.833025  616253 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 13:10:06.833031  616253 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0520 13:10:06.833038  616253 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:10:06.833047  616253 command_runner.go:130] > Access: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833056  616253 command_runner.go:130] > Modify: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833063  616253 command_runner.go:130] > Change: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833073  616253 command_runner.go:130] >  Birth: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833179  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:10:06.838619  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.838730  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:10:06.844033  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.844171  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:10:06.849741  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.849814  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:10:06.855149  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.855408  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:10:06.860980  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.861051  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 13:10:06.866652  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.866710  616253 kubeadm.go:391] StartCluster: {Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:10:06.866783  616253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:10:06.866833  616253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:10:06.902965  616253 command_runner.go:130] > 0c62efa8614b8a9332b7854cf566f0019f7bf9769580bec1fc8ca8128436ef71
	I0520 13:10:06.902989  616253 command_runner.go:130] > 994be8edcd6dc8ef46d99f9edf9002a4126475365cff14f88d3a04d621a1327d
	I0520 13:10:06.902995  616253 command_runner.go:130] > 2f0690832c93fdf31579092306b21563d063c9fe1bffdb5a8ec45484f5235b44
	I0520 13:10:06.903044  616253 command_runner.go:130] > 725502a8fab5f94c88c7a65658b3916b6b807344fa64f8ef380324d845067145
	I0520 13:10:06.903069  616253 command_runner.go:130] > 8bc22f563655c225e4d4887e10763bb0b1eca39ab8d0d7601d82b669e43e689f
	I0520 13:10:06.903081  616253 command_runner.go:130] > 8178b9248ec3d3dadab1b6998390cda60be94dd606fc9f3bd00b3ec46fdaba5d
	I0520 13:10:06.903093  616253 command_runner.go:130] > fea004b7d1708081f1e62b29a80ceb9f5be30b4a2bec0951e79fe26d77d3e428
	I0520 13:10:06.903107  616253 command_runner.go:130] > bd8dcb4e3314afac1a5e1eee4e96ff11f17bd70ba4d80d3dc5062377f01dbcf1
	I0520 13:10:06.903119  616253 command_runner.go:130] > 5353f272c983196bcbefbb85bb7a426173f7fa7ce23104785380f243e75fee32
	I0520 13:10:06.903131  616253 command_runner.go:130] > 88bc645ba8dc33e1953a1f03a31532c2fe5189427addb21d9aac04febf162b2c
	I0520 13:10:06.903180  616253 command_runner.go:130] > 6d1d6c466d0706c03ec1ca14158c2006d6050ab4e4d67c8894f2b88559394387
	I0520 13:10:06.903306  616253 command_runner.go:130] > f5a1bfc0038588fe036fb0a09d1f1319f63830dceb48128bcd013cf1daba9feb
	I0520 13:10:06.904853  616253 cri.go:89] found id: "0c62efa8614b8a9332b7854cf566f0019f7bf9769580bec1fc8ca8128436ef71"
	I0520 13:10:06.904870  616253 cri.go:89] found id: "994be8edcd6dc8ef46d99f9edf9002a4126475365cff14f88d3a04d621a1327d"
	I0520 13:10:06.904876  616253 cri.go:89] found id: "2f0690832c93fdf31579092306b21563d063c9fe1bffdb5a8ec45484f5235b44"
	I0520 13:10:06.904880  616253 cri.go:89] found id: "725502a8fab5f94c88c7a65658b3916b6b807344fa64f8ef380324d845067145"
	I0520 13:10:06.904884  616253 cri.go:89] found id: "8bc22f563655c225e4d4887e10763bb0b1eca39ab8d0d7601d82b669e43e689f"
	I0520 13:10:06.904889  616253 cri.go:89] found id: "8178b9248ec3d3dadab1b6998390cda60be94dd606fc9f3bd00b3ec46fdaba5d"
	I0520 13:10:06.904893  616253 cri.go:89] found id: "fea004b7d1708081f1e62b29a80ceb9f5be30b4a2bec0951e79fe26d77d3e428"
	I0520 13:10:06.904896  616253 cri.go:89] found id: "bd8dcb4e3314afac1a5e1eee4e96ff11f17bd70ba4d80d3dc5062377f01dbcf1"
	I0520 13:10:06.904900  616253 cri.go:89] found id: "5353f272c983196bcbefbb85bb7a426173f7fa7ce23104785380f243e75fee32"
	I0520 13:10:06.904911  616253 cri.go:89] found id: "88bc645ba8dc33e1953a1f03a31532c2fe5189427addb21d9aac04febf162b2c"
	I0520 13:10:06.904918  616253 cri.go:89] found id: "6d1d6c466d0706c03ec1ca14158c2006d6050ab4e4d67c8894f2b88559394387"
	I0520 13:10:06.904924  616253 cri.go:89] found id: "f5a1bfc0038588fe036fb0a09d1f1319f63830dceb48128bcd013cf1daba9feb"
	I0520 13:10:06.904929  616253 cri.go:89] found id: ""
	I0520 13:10:06.904982  616253 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
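For reference, the certificate handling shown in the start log above can be replayed by hand inside the guest. The commands below are only an illustrative sketch (not part of the test), using the same paths the log records: the minikube CA is linked into the system trust store under its OpenSSL subject hash, and each control-plane certificate is probed with -checkend 86400, which prints "Certificate will not expire" and exits 0 only if the certificate is still valid 24 hours from now.

	# compute the subject hash of the CA (the log shows b5213941) and link it into the trust store
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

	# probe a control-plane certificate for expiry within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400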
functional_test.go:657: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-694790 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 17m42.67200021s for "functional-694790" cluster.
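Before the post-mortem, note that the tail of the start log shows minikube taking stock of the workload already present in the VM: it asks CRI-O for every kube-system container and then queries the OCI runtime directly. The same enumeration can be reproduced over minikube ssh; this is an illustrative sketch reusing the two commands quoted verbatim in the log above.

	# list all kube-system container IDs known to CRI-O, including stopped ones
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# cross-check with the low-level runtime's own view of the containers
	sudo runc list -f json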
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-694790 -n functional-694790
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 logs -n 25: (1.044846521s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | addons-840762 addons disable   | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-840762 addons disable   | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:01 UTC | 20 May 24 13:01 UTC |
	|         | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| addons  | addons-840762 addons           | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:03 UTC | 20 May 24 13:03 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| addons  | addons-840762 addons disable   | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:03 UTC | 20 May 24 13:03 UTC |
	|         | gcp-auth --alsologtostderr     |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| stop    | -p addons-840762               | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:03 UTC |                     |
	| addons  | enable dashboard -p            | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:05 UTC |                     |
	|         | addons-840762                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:06 UTC |                     |
	|         | addons-840762                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:06 UTC |                     |
	|         | addons-840762                  |                   |         |         |                     |                     |
	| delete  | -p addons-840762               | addons-840762     | jenkins | v1.33.1 | 20 May 24 13:06 UTC | 20 May 24 13:06 UTC |
	| start   | -p nospam-609784 -n=1          | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:06 UTC | 20 May 24 13:07 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-609784   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	|         | /tmp/nospam-609784 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	|         | /tmp/nospam-609784 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	|         | /tmp/nospam-609784 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 pause       |                   |         |         |                     |                     |
	| pause   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 pause       |                   |         |         |                     |                     |
	| pause   | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 pause       |                   |         |         |                     |                     |
	| unpause | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 stop        |                   |         |         |                     |                     |
	| stop    | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 stop        |                   |         |         |                     |                     |
	| stop    | nospam-609784 --log_dir        | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/nospam-609784 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-609784               | nospam-609784     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	| start   | -p functional-694790           | functional-694790 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:08 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-694790           | functional-694790 | jenkins | v1.33.1 | 20 May 24 13:08 UTC |                     |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:08:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:08:26.734453  616253 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:08:26.734693  616253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:08:26.734702  616253 out.go:304] Setting ErrFile to fd 2...
	I0520 13:08:26.734706  616253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:08:26.734906  616253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:08:26.735440  616253 out.go:298] Setting JSON to false
	I0520 13:08:26.736363  616253 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10247,"bootTime":1716200260,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:08:26.736424  616253 start.go:139] virtualization: kvm guest
	I0520 13:08:26.739826  616253 out.go:177] * [functional-694790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:08:26.742194  616253 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:08:26.744238  616253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:08:26.742257  616253 notify.go:220] Checking for updates...
	I0520 13:08:26.746662  616253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:08:26.749083  616253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:08:26.751463  616253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:08:26.753716  616253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:08:26.756362  616253 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:08:26.756482  616253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:08:26.756919  616253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:08:26.756982  616253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:08:26.772413  616253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
	I0520 13:08:26.772986  616253 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:08:26.773584  616253 main.go:141] libmachine: Using API Version  1
	I0520 13:08:26.773611  616253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:08:26.774051  616253 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:08:26.774291  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.812111  616253 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:08:26.814376  616253 start.go:297] selected driver: kvm2
	I0520 13:08:26.814397  616253 start.go:901] validating driver "kvm2" against &{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:08:26.814513  616253 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:08:26.814855  616253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:08:26.814958  616253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:08:26.830243  616253 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:08:26.831035  616253 cni.go:84] Creating CNI manager for ""
	I0520 13:08:26.831050  616253 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:08:26.831117  616253 start.go:340] cluster config:
	{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:08:26.831235  616253 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:08:26.833917  616253 out.go:177] * Starting "functional-694790" primary control-plane node in "functional-694790" cluster
	I0520 13:08:26.836341  616253 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:08:26.836386  616253 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:08:26.836400  616253 cache.go:56] Caching tarball of preloaded images
	I0520 13:08:26.836504  616253 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:08:26.836520  616253 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:08:26.836629  616253 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/config.json ...
	I0520 13:08:26.836851  616253 start.go:360] acquireMachinesLock for functional-694790: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:08:26.836903  616253 start.go:364] duration metric: took 30.525µs to acquireMachinesLock for "functional-694790"
	I0520 13:08:26.836924  616253 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:08:26.836933  616253 fix.go:54] fixHost starting: 
	I0520 13:08:26.837222  616253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:08:26.837311  616253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:08:26.853563  616253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0520 13:08:26.854176  616253 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:08:26.854715  616253 main.go:141] libmachine: Using API Version  1
	I0520 13:08:26.854742  616253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:08:26.855108  616253 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:08:26.855362  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.855569  616253 main.go:141] libmachine: (functional-694790) Calling .GetState
	I0520 13:08:26.857420  616253 fix.go:112] recreateIfNeeded on functional-694790: state=Running err=<nil>
	W0520 13:08:26.857465  616253 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:08:26.860286  616253 out.go:177] * Updating the running kvm2 "functional-694790" VM ...
	I0520 13:08:26.862456  616253 machine.go:94] provisionDockerMachine start ...
	I0520 13:08:26.862479  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:26.862698  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:26.865144  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.865615  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:26.865647  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.865790  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:26.865983  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.866151  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.866285  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:26.866437  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:26.866613  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:26.866623  616253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:08:26.973470  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-694790
	
	I0520 13:08:26.973500  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:26.973764  616253 buildroot.go:166] provisioning hostname "functional-694790"
	I0520 13:08:26.973782  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:26.973994  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:26.977222  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.977605  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:26.977647  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:26.977834  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:26.978062  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.978271  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:26.978423  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:26.978586  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:26.978749  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:26.978761  616253 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-694790 && echo "functional-694790" | sudo tee /etc/hostname
	I0520 13:08:27.099833  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-694790
	
	I0520 13:08:27.099873  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.102963  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.103308  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.103346  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.103529  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.103755  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.103950  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.104145  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.104302  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:27.104474  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:27.104491  616253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-694790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-694790/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-694790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:08:27.209794  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:08:27.209827  616253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:08:27.209849  616253 buildroot.go:174] setting up certificates
	I0520 13:08:27.209857  616253 provision.go:84] configureAuth start
	I0520 13:08:27.209869  616253 main.go:141] libmachine: (functional-694790) Calling .GetMachineName
	I0520 13:08:27.210191  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:08:27.213131  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.213532  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.213564  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.213740  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.216163  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.216506  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.216540  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.216638  616253 provision.go:143] copyHostCerts
	I0520 13:08:27.216804  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:08:27.216852  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:08:27.216873  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:08:27.216955  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:08:27.217072  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:08:27.217094  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:08:27.217098  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:08:27.217124  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:08:27.217175  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:08:27.217195  616253 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:08:27.217202  616253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:08:27.217226  616253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:08:27.217329  616253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.functional-694790 san=[127.0.0.1 192.168.39.165 functional-694790 localhost minikube]
	I0520 13:08:27.347990  616253 provision.go:177] copyRemoteCerts
	I0520 13:08:27.348054  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:08:27.348080  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.351038  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.351400  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.351438  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.351599  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.351744  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.351879  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.352065  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:27.440528  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:08:27.440603  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 13:08:27.469409  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:08:27.469498  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:08:27.492300  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:08:27.492393  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:08:27.516420  616253 provision.go:87] duration metric: took 306.549523ms to configureAuth
	I0520 13:08:27.516454  616253 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:08:27.516636  616253 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:08:27.516739  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:27.519724  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.520079  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:27.520114  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:27.520212  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:27.520556  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.520757  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:27.520945  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:27.521178  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:27.521400  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:27.521418  616253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:08:33.122283  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:08:33.122316  616253 machine.go:97] duration metric: took 6.259843677s to provisionDockerMachine
	I0520 13:08:33.122331  616253 start.go:293] postStartSetup for "functional-694790" (driver="kvm2")
	I0520 13:08:33.122343  616253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:08:33.122362  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.122709  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:08:33.122735  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.125553  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.125975  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.126011  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.126167  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.126381  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.126601  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.126757  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.212290  616253 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:08:33.216257  616253 command_runner.go:130] > NAME=Buildroot
	I0520 13:08:33.216287  616253 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 13:08:33.216291  616253 command_runner.go:130] > ID=buildroot
	I0520 13:08:33.216295  616253 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 13:08:33.216301  616253 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 13:08:33.216338  616253 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:08:33.216359  616253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:08:33.216433  616253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:08:33.216559  616253 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:08:33.216572  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:08:33.216635  616253 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts -> hosts in /etc/test/nested/copy/609867
	I0520 13:08:33.216644  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts -> /etc/test/nested/copy/609867/hosts
	I0520 13:08:33.216678  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/609867
	I0520 13:08:33.226292  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:08:33.249802  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts --> /etc/test/nested/copy/609867/hosts (40 bytes)
	I0520 13:08:33.272580  616253 start.go:296] duration metric: took 150.233991ms for postStartSetup
	I0520 13:08:33.272636  616253 fix.go:56] duration metric: took 6.435701648s for fixHost
	I0520 13:08:33.272683  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.275729  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.276119  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.276158  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.276313  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.276554  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.276736  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.276936  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.277228  616253 main.go:141] libmachine: Using SSH client type: native
	I0520 13:08:33.277439  616253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0520 13:08:33.277450  616253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:08:33.381944  616253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210513.374511326
	
	I0520 13:08:33.381978  616253 fix.go:216] guest clock: 1716210513.374511326
	I0520 13:08:33.381989  616253 fix.go:229] Guest: 2024-05-20 13:08:33.374511326 +0000 UTC Remote: 2024-05-20 13:08:33.272641604 +0000 UTC m=+6.572255559 (delta=101.869722ms)
	I0520 13:08:33.382022  616253 fix.go:200] guest clock delta is within tolerance: 101.869722ms
	I0520 13:08:33.382031  616253 start.go:83] releasing machines lock for "functional-694790", held for 6.54511315s
	I0520 13:08:33.382067  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.382358  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:08:33.384955  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.385314  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.385351  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.385466  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386084  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386262  616253 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:08:33.386329  616253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:08:33.386379  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.386531  616253 ssh_runner.go:195] Run: cat /version.json
	I0520 13:08:33.386562  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
	I0520 13:08:33.389148  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389509  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.389541  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389716  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.389747  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.389921  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.390092  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.390151  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:08:33.390177  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:08:33.390265  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.390391  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
	I0520 13:08:33.390555  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
	I0520 13:08:33.390751  616253 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
	I0520 13:08:33.390904  616253 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
	I0520 13:08:33.498976  616253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 13:08:33.499045  616253 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 13:08:33.499190  616253 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:08:33.499298  616253 ssh_runner.go:195] Run: systemctl --version
	I0520 13:08:33.505479  616253 command_runner.go:130] > systemd 252 (252)
	I0520 13:08:33.505527  616253 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 13:08:33.505616  616253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:08:33.902962  616253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 13:08:33.967723  616253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 13:08:33.968221  616253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:08:33.968307  616253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:08:34.025380  616253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:08:34.025424  616253 start.go:494] detecting cgroup driver to use...
	I0520 13:08:34.025507  616253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:08:34.120247  616253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:08:34.176301  616253 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:08:34.176389  616253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:08:34.210162  616253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:08:34.237578  616253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:08:34.477160  616253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:08:34.701226  616253 docker.go:233] disabling docker service ...
	I0520 13:08:34.701358  616253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:08:34.754859  616253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:08:34.777764  616253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:08:34.959596  616253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:08:35.149350  616253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:08:35.163721  616253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:08:35.185396  616253 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 13:08:35.185566  616253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:08:35.185642  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.198694  616253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:08:35.198788  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.210858  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.221722  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.234110  616253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:08:35.245952  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.256480  616253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.267035  616253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:08:35.277016  616253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:08:35.286106  616253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 13:08:35.286469  616253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:08:35.296318  616253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:08:35.468962  616253 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:10:05.972393  616253 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.503367989s)
	I0520 13:10:05.972450  616253 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:10:05.972520  616253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:10:05.977889  616253 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 13:10:05.977918  616253 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 13:10:05.977926  616253 command_runner.go:130] > Device: 0,22	Inode: 1640        Links: 1
	I0520 13:10:05.977937  616253 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:10:05.977942  616253 command_runner.go:130] > Access: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977951  616253 command_runner.go:130] > Modify: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977958  616253 command_runner.go:130] > Change: 2024-05-20 13:10:05.747954310 +0000
	I0520 13:10:05.977964  616253 command_runner.go:130] >  Birth: -
	I0520 13:10:05.977997  616253 start.go:562] Will wait 60s for crictl version
	I0520 13:10:05.978066  616253 ssh_runner.go:195] Run: which crictl
	I0520 13:10:05.981829  616253 command_runner.go:130] > /usr/bin/crictl
	I0520 13:10:05.981911  616253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:10:06.019728  616253 command_runner.go:130] > Version:  0.1.0
	I0520 13:10:06.019757  616253 command_runner.go:130] > RuntimeName:  cri-o
	I0520 13:10:06.019763  616253 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 13:10:06.019771  616253 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 13:10:06.020771  616253 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:10:06.020873  616253 ssh_runner.go:195] Run: crio --version
	I0520 13:10:06.047914  616253 command_runner.go:130] > crio version 1.29.1
	I0520 13:10:06.047945  616253 command_runner.go:130] > Version:        1.29.1
	I0520 13:10:06.047951  616253 command_runner.go:130] > GitCommit:      unknown
	I0520 13:10:06.047955  616253 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:10:06.047959  616253 command_runner.go:130] > GitTreeState:   clean
	I0520 13:10:06.047965  616253 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:10:06.047969  616253 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:10:06.047973  616253 command_runner.go:130] > Compiler:       gc
	I0520 13:10:06.047978  616253 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:10:06.047982  616253 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:10:06.047987  616253 command_runner.go:130] > BuildTags:      
	I0520 13:10:06.047991  616253 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:10:06.047995  616253 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:10:06.047999  616253 command_runner.go:130] >   btrfs_noversion
	I0520 13:10:06.048006  616253 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:10:06.048013  616253 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:10:06.048022  616253 command_runner.go:130] >   seccomp
	I0520 13:10:06.048031  616253 command_runner.go:130] > LDFlags:          unknown
	I0520 13:10:06.048035  616253 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:10:06.048039  616253 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:10:06.049273  616253 ssh_runner.go:195] Run: crio --version
	I0520 13:10:06.079787  616253 command_runner.go:130] > crio version 1.29.1
	I0520 13:10:06.079821  616253 command_runner.go:130] > Version:        1.29.1
	I0520 13:10:06.079829  616253 command_runner.go:130] > GitCommit:      unknown
	I0520 13:10:06.079836  616253 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:10:06.079844  616253 command_runner.go:130] > GitTreeState:   clean
	I0520 13:10:06.079852  616253 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:10:06.079857  616253 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:10:06.079861  616253 command_runner.go:130] > Compiler:       gc
	I0520 13:10:06.079866  616253 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:10:06.079869  616253 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:10:06.079874  616253 command_runner.go:130] > BuildTags:      
	I0520 13:10:06.079877  616253 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:10:06.079882  616253 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:10:06.079885  616253 command_runner.go:130] >   btrfs_noversion
	I0520 13:10:06.079890  616253 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:10:06.079894  616253 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:10:06.079897  616253 command_runner.go:130] >   seccomp
	I0520 13:10:06.079900  616253 command_runner.go:130] > LDFlags:          unknown
	I0520 13:10:06.079904  616253 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:10:06.079908  616253 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:10:06.082986  616253 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:10:06.085358  616253 main.go:141] libmachine: (functional-694790) Calling .GetIP
	I0520 13:10:06.088257  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:10:06.088624  616253 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
	I0520 13:10:06.088655  616253 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
	I0520 13:10:06.088867  616253 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:10:06.092881  616253 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 13:10:06.093020  616253 kubeadm.go:877] updating cluster {Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:10:06.093188  616253 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:10:06.093270  616253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:10:06.138840  616253 command_runner.go:130] > {
	I0520 13:10:06.138876  616253 command_runner.go:130] >   "images": [
	I0520 13:10:06.138882  616253 command_runner.go:130] >     {
	I0520 13:10:06.138895  616253 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:10:06.138903  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.138912  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:10:06.138918  616253 command_runner.go:130] >       ],
	I0520 13:10:06.138924  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.138935  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:10:06.138945  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:10:06.138950  616253 command_runner.go:130] >       ],
	I0520 13:10:06.138958  616253 command_runner.go:130] >       "size": "65291810",
	I0520 13:10:06.138964  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.138970  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.138988  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.138996  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139001  616253 command_runner.go:130] >     },
	I0520 13:10:06.139006  616253 command_runner.go:130] >     {
	I0520 13:10:06.139019  616253 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:10:06.139025  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139034  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:10:06.139041  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139048  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139068  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:10:06.139081  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:10:06.139086  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139099  616253 command_runner.go:130] >       "size": "31470524",
	I0520 13:10:06.139105  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139112  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139118  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139124  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139129  616253 command_runner.go:130] >     },
	I0520 13:10:06.139134  616253 command_runner.go:130] >     {
	I0520 13:10:06.139143  616253 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:10:06.139152  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139160  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:10:06.139165  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139171  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139182  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:10:06.139193  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:10:06.139197  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139204  616253 command_runner.go:130] >       "size": "61245718",
	I0520 13:10:06.139210  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139216  616253 command_runner.go:130] >       "username": "nonroot",
	I0520 13:10:06.139222  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139229  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139235  616253 command_runner.go:130] >     },
	I0520 13:10:06.139250  616253 command_runner.go:130] >     {
	I0520 13:10:06.139259  616253 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:10:06.139265  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139273  616253 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:10:06.139282  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139288  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139298  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:10:06.139311  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:10:06.139318  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139322  616253 command_runner.go:130] >       "size": "150779692",
	I0520 13:10:06.139325  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139329  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139333  616253 command_runner.go:130] >       },
	I0520 13:10:06.139336  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139343  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139348  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139352  616253 command_runner.go:130] >     },
	I0520 13:10:06.139355  616253 command_runner.go:130] >     {
	I0520 13:10:06.139360  616253 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:10:06.139365  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139370  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:10:06.139375  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139379  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139402  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:10:06.139412  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:10:06.139415  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139419  616253 command_runner.go:130] >       "size": "117601759",
	I0520 13:10:06.139422  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139426  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139429  616253 command_runner.go:130] >       },
	I0520 13:10:06.139434  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139438  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139444  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139447  616253 command_runner.go:130] >     },
	I0520 13:10:06.139451  616253 command_runner.go:130] >     {
	I0520 13:10:06.139457  616253 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:10:06.139478  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139485  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:10:06.139489  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139493  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139500  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:10:06.139509  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:10:06.139512  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139517  616253 command_runner.go:130] >       "size": "112170310",
	I0520 13:10:06.139521  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139525  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139528  616253 command_runner.go:130] >       },
	I0520 13:10:06.139533  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139539  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139543  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139546  616253 command_runner.go:130] >     },
	I0520 13:10:06.139550  616253 command_runner.go:130] >     {
	I0520 13:10:06.139555  616253 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:10:06.139562  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139567  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:10:06.139570  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139574  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139581  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:10:06.139588  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:10:06.139593  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139600  616253 command_runner.go:130] >       "size": "85933465",
	I0520 13:10:06.139604  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.139607  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139611  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139615  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139618  616253 command_runner.go:130] >     },
	I0520 13:10:06.139623  616253 command_runner.go:130] >     {
	I0520 13:10:06.139629  616253 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:10:06.139633  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139640  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:10:06.139644  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139648  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139662  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:10:06.139671  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:10:06.139675  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139681  616253 command_runner.go:130] >       "size": "63026504",
	I0520 13:10:06.139685  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139690  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.139693  616253 command_runner.go:130] >       },
	I0520 13:10:06.139697  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139701  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139705  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.139711  616253 command_runner.go:130] >     },
	I0520 13:10:06.139714  616253 command_runner.go:130] >     {
	I0520 13:10:06.139720  616253 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:10:06.139725  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.139729  616253 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:10:06.139733  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139736  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.139743  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:10:06.139752  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:10:06.139757  616253 command_runner.go:130] >       ],
	I0520 13:10:06.139764  616253 command_runner.go:130] >       "size": "750414",
	I0520 13:10:06.139767  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.139771  616253 command_runner.go:130] >         "value": "65535"
	I0520 13:10:06.139775  616253 command_runner.go:130] >       },
	I0520 13:10:06.139779  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.139783  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.139787  616253 command_runner.go:130] >       "pinned": true
	I0520 13:10:06.139789  616253 command_runner.go:130] >     }
	I0520 13:10:06.139792  616253 command_runner.go:130] >   ]
	I0520 13:10:06.139795  616253 command_runner.go:130] > }
	I0520 13:10:06.140007  616253 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:10:06.140022  616253 crio.go:433] Images already preloaded, skipping extraction
	I0520 13:10:06.140087  616253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:10:06.172124  616253 command_runner.go:130] > {
	I0520 13:10:06.172159  616253 command_runner.go:130] >   "images": [
	I0520 13:10:06.172165  616253 command_runner.go:130] >     {
	I0520 13:10:06.172181  616253 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:10:06.172189  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172198  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:10:06.172204  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172210  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172224  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:10:06.172236  616253 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:10:06.172241  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172249  616253 command_runner.go:130] >       "size": "65291810",
	I0520 13:10:06.172256  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172263  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172276  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172283  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172291  616253 command_runner.go:130] >     },
	I0520 13:10:06.172296  616253 command_runner.go:130] >     {
	I0520 13:10:06.172305  616253 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:10:06.172314  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172322  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:10:06.172328  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172334  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172345  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:10:06.172358  616253 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:10:06.172363  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172374  616253 command_runner.go:130] >       "size": "31470524",
	I0520 13:10:06.172380  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172385  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172391  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172399  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172404  616253 command_runner.go:130] >     },
	I0520 13:10:06.172409  616253 command_runner.go:130] >     {
	I0520 13:10:06.172418  616253 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:10:06.172425  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172432  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:10:06.172438  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172449  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172462  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:10:06.172473  616253 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:10:06.172484  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172490  616253 command_runner.go:130] >       "size": "61245718",
	I0520 13:10:06.172500  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.172507  616253 command_runner.go:130] >       "username": "nonroot",
	I0520 13:10:06.172515  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172521  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172529  616253 command_runner.go:130] >     },
	I0520 13:10:06.172535  616253 command_runner.go:130] >     {
	I0520 13:10:06.172544  616253 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:10:06.172552  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172560  616253 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:10:06.172569  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172575  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172587  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:10:06.172604  616253 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:10:06.172612  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172619  616253 command_runner.go:130] >       "size": "150779692",
	I0520 13:10:06.172628  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172635  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172644  616253 command_runner.go:130] >       },
	I0520 13:10:06.172655  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172664  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172670  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172675  616253 command_runner.go:130] >     },
	I0520 13:10:06.172682  616253 command_runner.go:130] >     {
	I0520 13:10:06.172692  616253 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:10:06.172701  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172710  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:10:06.172720  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172726  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172740  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:10:06.172754  616253 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:10:06.172761  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172769  616253 command_runner.go:130] >       "size": "117601759",
	I0520 13:10:06.172777  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172783  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172791  616253 command_runner.go:130] >       },
	I0520 13:10:06.172797  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172807  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172813  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172821  616253 command_runner.go:130] >     },
	I0520 13:10:06.172826  616253 command_runner.go:130] >     {
	I0520 13:10:06.172847  616253 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:10:06.172855  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.172863  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:10:06.172872  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172879  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.172895  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:10:06.172909  616253 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:10:06.172915  616253 command_runner.go:130] >       ],
	I0520 13:10:06.172923  616253 command_runner.go:130] >       "size": "112170310",
	I0520 13:10:06.172927  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.172933  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.172938  616253 command_runner.go:130] >       },
	I0520 13:10:06.172947  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.172953  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.172960  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.172965  616253 command_runner.go:130] >     },
	I0520 13:10:06.172970  616253 command_runner.go:130] >     {
	I0520 13:10:06.172979  616253 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:10:06.172993  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173003  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:10:06.173009  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173018  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173029  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:10:06.173043  616253 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:10:06.173052  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173059  616253 command_runner.go:130] >       "size": "85933465",
	I0520 13:10:06.173068  616253 command_runner.go:130] >       "uid": null,
	I0520 13:10:06.173077  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173087  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173093  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.173101  616253 command_runner.go:130] >     },
	I0520 13:10:06.173106  616253 command_runner.go:130] >     {
	I0520 13:10:06.173122  616253 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:10:06.173132  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173139  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:10:06.173148  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173153  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173217  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:10:06.173235  616253 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:10:06.173264  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173272  616253 command_runner.go:130] >       "size": "63026504",
	I0520 13:10:06.173278  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.173283  616253 command_runner.go:130] >         "value": "0"
	I0520 13:10:06.173288  616253 command_runner.go:130] >       },
	I0520 13:10:06.173294  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173300  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173305  616253 command_runner.go:130] >       "pinned": false
	I0520 13:10:06.173311  616253 command_runner.go:130] >     },
	I0520 13:10:06.173316  616253 command_runner.go:130] >     {
	I0520 13:10:06.173325  616253 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:10:06.173331  616253 command_runner.go:130] >       "repoTags": [
	I0520 13:10:06.173338  616253 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:10:06.173344  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173351  616253 command_runner.go:130] >       "repoDigests": [
	I0520 13:10:06.173362  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:10:06.173373  616253 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:10:06.173378  616253 command_runner.go:130] >       ],
	I0520 13:10:06.173385  616253 command_runner.go:130] >       "size": "750414",
	I0520 13:10:06.173390  616253 command_runner.go:130] >       "uid": {
	I0520 13:10:06.173396  616253 command_runner.go:130] >         "value": "65535"
	I0520 13:10:06.173403  616253 command_runner.go:130] >       },
	I0520 13:10:06.173413  616253 command_runner.go:130] >       "username": "",
	I0520 13:10:06.173419  616253 command_runner.go:130] >       "spec": null,
	I0520 13:10:06.173428  616253 command_runner.go:130] >       "pinned": true
	I0520 13:10:06.173436  616253 command_runner.go:130] >     }
	I0520 13:10:06.173442  616253 command_runner.go:130] >   ]
	I0520 13:10:06.173450  616253 command_runner.go:130] > }
	I0520 13:10:06.173632  616253 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:10:06.173655  616253 cache_images.go:84] Images are preloaded, skipping loading
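	The JSON listing above is the CRI image inventory that minikube reads back from CRI-O before deciding whether to load cached images; running `sudo crictl images -o json` on the node should return the same shape. As a hedged illustration only (the struct below is inferred from the fields shown in this log, not taken from a published API type), a minimal Go sketch that decodes such a listing could look like:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Field names mirror the JSON shown in the log above; this is an
	// illustrative decoder, not an official CRI client type.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			UID         *struct {
				Value string `json:"value"`
			} `json:"uid"`
			Username string `json:"username"`
			Pinned   bool   `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// images.json is assumed to hold output captured on the node,
		// e.g. `sudo crictl images -o json > images.json`.
		data, err := os.ReadFile("images.json")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%s (pinned=%v, size=%s)\n", img.RepoTags[0], img.Pinned, img.Size)
			}
		}
	}

	Fed the capture from this node, it would print one line per tagged image, including the pinned pause image seen later in the listing.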
	I0520 13:10:06.173665  616253 kubeadm.go:928] updating node { 192.168.39.165 8441 v1.30.1 crio true true} ...
	I0520 13:10:06.173780  616253 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-694790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
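	The kubelet [Unit]/[Service] snippet and the cluster config line above are the inputs minikube uses when it writes the kubelet systemd drop-in for this node. The sketch below is not minikube's actual template; it is a minimal text/template rendering of the same flags, with the binary path, node name, and node IP taken from the log lines above, to show how such a drop-in can be generated:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative template only; the flag set and paths are copied from the
	// ExecStart line logged above, not from minikube's kubeadm template.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubeletPath": "/var/lib/minikube/binaries/v1.30.1/kubelet",
			"NodeName":    "functional-694790",
			"NodeIP":      "192.168.39.165",
		})
	}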
	I0520 13:10:06.173845  616253 ssh_runner.go:195] Run: crio config
	I0520 13:10:06.213463  616253 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 13:10:06.213495  616253 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 13:10:06.213502  616253 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 13:10:06.213507  616253 command_runner.go:130] > #
	I0520 13:10:06.213519  616253 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 13:10:06.213529  616253 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 13:10:06.213539  616253 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 13:10:06.213547  616253 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 13:10:06.213551  616253 command_runner.go:130] > # reload'.
	I0520 13:10:06.213556  616253 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 13:10:06.213562  616253 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 13:10:06.213568  616253 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 13:10:06.213578  616253 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 13:10:06.213583  616253 command_runner.go:130] > [crio]
	I0520 13:10:06.213591  616253 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 13:10:06.213599  616253 command_runner.go:130] > # containers images, in this directory.
	I0520 13:10:06.213608  616253 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 13:10:06.213631  616253 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 13:10:06.213644  616253 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 13:10:06.213651  616253 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 13:10:06.213656  616253 command_runner.go:130] > # imagestore = ""
	I0520 13:10:06.213661  616253 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 13:10:06.213667  616253 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 13:10:06.213676  616253 command_runner.go:130] > storage_driver = "overlay"
	I0520 13:10:06.213685  616253 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 13:10:06.213697  616253 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 13:10:06.213707  616253 command_runner.go:130] > storage_option = [
	I0520 13:10:06.213823  616253 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 13:10:06.213845  616253 command_runner.go:130] > ]
	I0520 13:10:06.213855  616253 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 13:10:06.213865  616253 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 13:10:06.213872  616253 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 13:10:06.213885  616253 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 13:10:06.213896  616253 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 13:10:06.213903  616253 command_runner.go:130] > # always happen on a node reboot
	I0520 13:10:06.213912  616253 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 13:10:06.213930  616253 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 13:10:06.213944  616253 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 13:10:06.213952  616253 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 13:10:06.213961  616253 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 13:10:06.213974  616253 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 13:10:06.213991  616253 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 13:10:06.213999  616253 command_runner.go:130] > # internal_wipe = true
	I0520 13:10:06.214012  616253 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 13:10:06.214022  616253 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 13:10:06.214029  616253 command_runner.go:130] > # internal_repair = false
	I0520 13:10:06.214037  616253 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 13:10:06.214046  616253 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 13:10:06.214061  616253 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 13:10:06.214074  616253 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 13:10:06.214086  616253 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 13:10:06.214092  616253 command_runner.go:130] > [crio.api]
	I0520 13:10:06.214103  616253 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 13:10:06.214113  616253 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 13:10:06.214123  616253 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 13:10:06.214133  616253 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 13:10:06.214144  616253 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 13:10:06.214156  616253 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 13:10:06.214163  616253 command_runner.go:130] > # stream_port = "0"
	I0520 13:10:06.214172  616253 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 13:10:06.214182  616253 command_runner.go:130] > # stream_enable_tls = false
	I0520 13:10:06.214195  616253 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 13:10:06.214210  616253 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 13:10:06.214221  616253 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 13:10:06.214233  616253 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 13:10:06.214239  616253 command_runner.go:130] > # minutes.
	I0520 13:10:06.214248  616253 command_runner.go:130] > # stream_tls_cert = ""
	I0520 13:10:06.214262  616253 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 13:10:06.214272  616253 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 13:10:06.214282  616253 command_runner.go:130] > # stream_tls_key = ""
	I0520 13:10:06.214292  616253 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 13:10:06.214305  616253 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 13:10:06.214326  616253 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 13:10:06.214336  616253 command_runner.go:130] > # stream_tls_ca = ""
	I0520 13:10:06.214347  616253 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:10:06.214358  616253 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 13:10:06.214369  616253 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:10:06.214381  616253 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 13:10:06.214390  616253 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 13:10:06.214403  616253 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 13:10:06.214410  616253 command_runner.go:130] > [crio.runtime]
	I0520 13:10:06.214419  616253 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 13:10:06.214431  616253 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 13:10:06.214441  616253 command_runner.go:130] > # "nofile=1024:2048"
	I0520 13:10:06.214453  616253 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 13:10:06.214464  616253 command_runner.go:130] > # default_ulimits = [
	I0520 13:10:06.214469  616253 command_runner.go:130] > # ]
	I0520 13:10:06.214481  616253 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 13:10:06.214493  616253 command_runner.go:130] > # no_pivot = false
	I0520 13:10:06.214504  616253 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 13:10:06.214518  616253 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 13:10:06.214528  616253 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 13:10:06.214538  616253 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 13:10:06.214549  616253 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 13:10:06.214565  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:10:06.214577  616253 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 13:10:06.214588  616253 command_runner.go:130] > # Cgroup setting for conmon
	I0520 13:10:06.214602  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 13:10:06.214612  616253 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 13:10:06.214623  616253 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 13:10:06.214634  616253 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 13:10:06.214648  616253 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:10:06.214658  616253 command_runner.go:130] > conmon_env = [
	I0520 13:10:06.214668  616253 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:10:06.214676  616253 command_runner.go:130] > ]
	I0520 13:10:06.214685  616253 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 13:10:06.214695  616253 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 13:10:06.214707  616253 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 13:10:06.214716  616253 command_runner.go:130] > # default_env = [
	I0520 13:10:06.214721  616253 command_runner.go:130] > # ]
	I0520 13:10:06.214732  616253 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 13:10:06.214743  616253 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 13:10:06.214752  616253 command_runner.go:130] > # selinux = false
	I0520 13:10:06.214762  616253 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 13:10:06.214775  616253 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 13:10:06.214787  616253 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 13:10:06.214796  616253 command_runner.go:130] > # seccomp_profile = ""
	I0520 13:10:06.214804  616253 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 13:10:06.214816  616253 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 13:10:06.214830  616253 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 13:10:06.214841  616253 command_runner.go:130] > # which might increase security.
	I0520 13:10:06.214849  616253 command_runner.go:130] > # This option is currently deprecated,
	I0520 13:10:06.214862  616253 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 13:10:06.214874  616253 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 13:10:06.214888  616253 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 13:10:06.214899  616253 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 13:10:06.214912  616253 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 13:10:06.214925  616253 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 13:10:06.214936  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.214947  616253 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 13:10:06.214956  616253 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 13:10:06.214967  616253 command_runner.go:130] > # the cgroup blockio controller.
	I0520 13:10:06.214973  616253 command_runner.go:130] > # blockio_config_file = ""
	I0520 13:10:06.214990  616253 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 13:10:06.214999  616253 command_runner.go:130] > # blockio parameters.
	I0520 13:10:06.215006  616253 command_runner.go:130] > # blockio_reload = false
	I0520 13:10:06.215018  616253 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 13:10:06.215031  616253 command_runner.go:130] > # irqbalance daemon.
	I0520 13:10:06.215043  616253 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 13:10:06.215056  616253 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 13:10:06.215070  616253 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 13:10:06.215081  616253 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 13:10:06.215093  616253 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 13:10:06.215105  616253 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 13:10:06.215117  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.215128  616253 command_runner.go:130] > # rdt_config_file = ""
	I0520 13:10:06.215136  616253 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 13:10:06.215149  616253 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 13:10:06.215176  616253 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 13:10:06.215186  616253 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 13:10:06.215201  616253 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 13:10:06.215214  616253 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 13:10:06.215224  616253 command_runner.go:130] > # will be added.
	I0520 13:10:06.215232  616253 command_runner.go:130] > # default_capabilities = [
	I0520 13:10:06.215238  616253 command_runner.go:130] > # 	"CHOWN",
	I0520 13:10:06.215243  616253 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 13:10:06.215252  616253 command_runner.go:130] > # 	"FSETID",
	I0520 13:10:06.215258  616253 command_runner.go:130] > # 	"FOWNER",
	I0520 13:10:06.215267  616253 command_runner.go:130] > # 	"SETGID",
	I0520 13:10:06.215273  616253 command_runner.go:130] > # 	"SETUID",
	I0520 13:10:06.215282  616253 command_runner.go:130] > # 	"SETPCAP",
	I0520 13:10:06.215291  616253 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 13:10:06.215300  616253 command_runner.go:130] > # 	"KILL",
	I0520 13:10:06.215308  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215320  616253 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 13:10:06.215335  616253 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 13:10:06.215344  616253 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 13:10:06.215354  616253 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 13:10:06.215370  616253 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:10:06.215383  616253 command_runner.go:130] > default_sysctls = [
	I0520 13:10:06.215391  616253 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 13:10:06.215399  616253 command_runner.go:130] > ]
	I0520 13:10:06.215408  616253 command_runner.go:130] > # List of devices on the host that a
	I0520 13:10:06.215422  616253 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 13:10:06.215431  616253 command_runner.go:130] > # allowed_devices = [
	I0520 13:10:06.215438  616253 command_runner.go:130] > # 	"/dev/fuse",
	I0520 13:10:06.215447  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215456  616253 command_runner.go:130] > # List of additional devices. specified as
	I0520 13:10:06.215469  616253 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 13:10:06.215482  616253 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 13:10:06.215494  616253 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:10:06.215505  616253 command_runner.go:130] > # additional_devices = [
	I0520 13:10:06.215511  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215520  616253 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 13:10:06.215530  616253 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 13:10:06.215536  616253 command_runner.go:130] > # 	"/etc/cdi",
	I0520 13:10:06.215541  616253 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 13:10:06.215549  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215559  616253 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 13:10:06.215578  616253 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 13:10:06.215588  616253 command_runner.go:130] > # Defaults to false.
	I0520 13:10:06.215597  616253 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 13:10:06.215610  616253 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 13:10:06.215621  616253 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 13:10:06.215630  616253 command_runner.go:130] > # hooks_dir = [
	I0520 13:10:06.215638  616253 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 13:10:06.215646  616253 command_runner.go:130] > # ]
	I0520 13:10:06.215654  616253 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 13:10:06.215668  616253 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 13:10:06.215681  616253 command_runner.go:130] > # its default mounts from the following two files:
	I0520 13:10:06.215687  616253 command_runner.go:130] > #
	I0520 13:10:06.215697  616253 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 13:10:06.215710  616253 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 13:10:06.215718  616253 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 13:10:06.215726  616253 command_runner.go:130] > #
	I0520 13:10:06.215738  616253 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 13:10:06.215751  616253 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 13:10:06.215766  616253 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 13:10:06.215777  616253 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 13:10:06.215786  616253 command_runner.go:130] > #
	I0520 13:10:06.215795  616253 command_runner.go:130] > # default_mounts_file = ""
	I0520 13:10:06.215806  616253 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 13:10:06.215817  616253 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 13:10:06.215826  616253 command_runner.go:130] > pids_limit = 1024
	I0520 13:10:06.215837  616253 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0520 13:10:06.215851  616253 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 13:10:06.215864  616253 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 13:10:06.215881  616253 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 13:10:06.215891  616253 command_runner.go:130] > # log_size_max = -1
	I0520 13:10:06.215902  616253 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 13:10:06.215912  616253 command_runner.go:130] > # log_to_journald = false
	I0520 13:10:06.215922  616253 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 13:10:06.215934  616253 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 13:10:06.215946  616253 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 13:10:06.215958  616253 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 13:10:06.215969  616253 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 13:10:06.215980  616253 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 13:10:06.215990  616253 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 13:10:06.215999  616253 command_runner.go:130] > # read_only = false
	I0520 13:10:06.216008  616253 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 13:10:06.216020  616253 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 13:10:06.216028  616253 command_runner.go:130] > # live configuration reload.
	I0520 13:10:06.216038  616253 command_runner.go:130] > # log_level = "info"
	I0520 13:10:06.216047  616253 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 13:10:06.216059  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.216068  616253 command_runner.go:130] > # log_filter = ""
	I0520 13:10:06.216077  616253 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 13:10:06.216091  616253 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 13:10:06.216102  616253 command_runner.go:130] > # separated by comma.
	I0520 13:10:06.216115  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216122  616253 command_runner.go:130] > # uid_mappings = ""
	I0520 13:10:06.216134  616253 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 13:10:06.216147  616253 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 13:10:06.216157  616253 command_runner.go:130] > # separated by comma.
	I0520 13:10:06.216170  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216180  616253 command_runner.go:130] > # gid_mappings = ""
	I0520 13:10:06.216191  616253 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 13:10:06.216211  616253 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:10:06.216225  616253 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:10:06.216241  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216249  616253 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 13:10:06.216260  616253 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 13:10:06.216273  616253 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:10:06.216287  616253 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:10:06.216303  616253 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:10:06.216313  616253 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 13:10:06.216323  616253 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 13:10:06.216336  616253 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 13:10:06.216348  616253 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 13:10:06.216358  616253 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 13:10:06.216366  616253 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 13:10:06.216378  616253 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 13:10:06.216388  616253 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 13:10:06.216395  616253 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 13:10:06.216405  616253 command_runner.go:130] > drop_infra_ctr = false
	I0520 13:10:06.216415  616253 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 13:10:06.216427  616253 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 13:10:06.216442  616253 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 13:10:06.216452  616253 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 13:10:06.216463  616253 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 13:10:06.216476  616253 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 13:10:06.216486  616253 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 13:10:06.216499  616253 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 13:10:06.216506  616253 command_runner.go:130] > # shared_cpuset = ""
	I0520 13:10:06.216519  616253 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 13:10:06.216531  616253 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 13:10:06.216542  616253 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 13:10:06.216555  616253 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 13:10:06.216566  616253 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 13:10:06.216575  616253 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 13:10:06.216588  616253 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 13:10:06.216600  616253 command_runner.go:130] > # enable_criu_support = false
	I0520 13:10:06.216610  616253 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 13:10:06.216623  616253 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 13:10:06.216634  616253 command_runner.go:130] > # enable_pod_events = false
	I0520 13:10:06.216648  616253 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:10:06.216661  616253 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:10:06.216672  616253 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 13:10:06.216680  616253 command_runner.go:130] > # default_runtime = "runc"
	I0520 13:10:06.216691  616253 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 13:10:06.216703  616253 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 13:10:06.216720  616253 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 13:10:06.216732  616253 command_runner.go:130] > # creation as a file is not desired either.
	I0520 13:10:06.216746  616253 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 13:10:06.216756  616253 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 13:10:06.216766  616253 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 13:10:06.216772  616253 command_runner.go:130] > # ]
	I0520 13:10:06.216785  616253 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 13:10:06.216802  616253 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 13:10:06.216815  616253 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 13:10:06.216826  616253 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 13:10:06.216833  616253 command_runner.go:130] > #
	I0520 13:10:06.216841  616253 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 13:10:06.216851  616253 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 13:10:06.216879  616253 command_runner.go:130] > # runtime_type = "oci"
	I0520 13:10:06.216891  616253 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 13:10:06.216898  616253 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 13:10:06.216905  616253 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 13:10:06.216915  616253 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 13:10:06.216921  616253 command_runner.go:130] > # monitor_env = []
	I0520 13:10:06.216932  616253 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 13:10:06.216940  616253 command_runner.go:130] > # allowed_annotations = []
	I0520 13:10:06.216952  616253 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 13:10:06.216962  616253 command_runner.go:130] > # Where:
	I0520 13:10:06.216970  616253 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 13:10:06.216979  616253 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 13:10:06.216992  616253 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 13:10:06.217004  616253 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 13:10:06.217013  616253 command_runner.go:130] > #   in $PATH.
	I0520 13:10:06.217022  616253 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 13:10:06.217033  616253 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 13:10:06.217045  616253 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 13:10:06.217054  616253 command_runner.go:130] > #   state.
	I0520 13:10:06.217064  616253 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 13:10:06.217077  616253 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 13:10:06.217089  616253 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 13:10:06.217102  616253 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 13:10:06.217115  616253 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 13:10:06.217128  616253 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 13:10:06.217140  616253 command_runner.go:130] > #   The currently recognized values are:
	I0520 13:10:06.217153  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 13:10:06.217168  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 13:10:06.217181  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 13:10:06.217191  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 13:10:06.217211  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 13:10:06.217225  616253 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 13:10:06.217239  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 13:10:06.217271  616253 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 13:10:06.217285  616253 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 13:10:06.217296  616253 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 13:10:06.217307  616253 command_runner.go:130] > #   deprecated option "conmon".
	I0520 13:10:06.217321  616253 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 13:10:06.217332  616253 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 13:10:06.217346  616253 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 13:10:06.217358  616253 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 13:10:06.217371  616253 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0520 13:10:06.217381  616253 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 13:10:06.217393  616253 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 13:10:06.217404  616253 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 13:10:06.217413  616253 command_runner.go:130] > #
	I0520 13:10:06.217423  616253 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 13:10:06.217427  616253 command_runner.go:130] > #
	I0520 13:10:06.217439  616253 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 13:10:06.217453  616253 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 13:10:06.217461  616253 command_runner.go:130] > #
	I0520 13:10:06.217472  616253 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 13:10:06.217485  616253 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 13:10:06.217493  616253 command_runner.go:130] > #
	I0520 13:10:06.217501  616253 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 13:10:06.217509  616253 command_runner.go:130] > # feature.
	I0520 13:10:06.217513  616253 command_runner.go:130] > #
	I0520 13:10:06.217525  616253 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 13:10:06.217539  616253 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 13:10:06.217551  616253 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 13:10:06.217564  616253 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 13:10:06.217576  616253 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 13:10:06.217585  616253 command_runner.go:130] > #
	I0520 13:10:06.217598  616253 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 13:10:06.217610  616253 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 13:10:06.217618  616253 command_runner.go:130] > #
	I0520 13:10:06.217629  616253 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 13:10:06.217643  616253 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 13:10:06.217651  616253 command_runner.go:130] > #
	I0520 13:10:06.217661  616253 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 13:10:06.217673  616253 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 13:10:06.217682  616253 command_runner.go:130] > # limitation.
	I0520 13:10:06.217689  616253 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 13:10:06.217699  616253 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 13:10:06.217706  616253 command_runner.go:130] > runtime_type = "oci"
	I0520 13:10:06.217715  616253 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 13:10:06.217721  616253 command_runner.go:130] > runtime_config_path = ""
	I0520 13:10:06.217730  616253 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 13:10:06.217736  616253 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 13:10:06.217746  616253 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 13:10:06.217754  616253 command_runner.go:130] > monitor_env = [
	I0520 13:10:06.217767  616253 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:10:06.217774  616253 command_runner.go:130] > ]
	I0520 13:10:06.217781  616253 command_runner.go:130] > privileged_without_host_devices = false
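	The comments above describe the format of the crio.runtime.runtimes table, and the runc entry that follows them is a concrete instance. As a sketch only (CRI-O's own config loader is not shown here, and the github.com/BurntSushi/toml dependency is an assumption made for illustration), the same table can be decoded into Go structs like this:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// Struct fields mirror the runtime-handler keys shown in the config above;
	// this is an illustrative parse of one table, not CRI-O's config package.
	type runtimeHandler struct {
		RuntimePath                  string   `toml:"runtime_path"`
		RuntimeType                  string   `toml:"runtime_type"`
		RuntimeRoot                  string   `toml:"runtime_root"`
		MonitorPath                  string   `toml:"monitor_path"`
		MonitorCgroup                string   `toml:"monitor_cgroup"`
		MonitorEnv                   []string `toml:"monitor_env"`
		PrivilegedWithoutHostDevices bool     `toml:"privileged_without_host_devices"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	// Sample copied from the runc table logged above.
	const sample = `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	privileged_without_host_devices = false
	`

	func main() {
		var cfg crioConfig
		if _, err := toml.Decode(sample, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}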
	I0520 13:10:06.217795  616253 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 13:10:06.217805  616253 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 13:10:06.217817  616253 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 13:10:06.217831  616253 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 13:10:06.217845  616253 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 13:10:06.217858  616253 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 13:10:06.217875  616253 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 13:10:06.217889  616253 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 13:10:06.217900  616253 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 13:10:06.217913  616253 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 13:10:06.217922  616253 command_runner.go:130] > # Example:
	I0520 13:10:06.217933  616253 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 13:10:06.217944  616253 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 13:10:06.217954  616253 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 13:10:06.217963  616253 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 13:10:06.217971  616253 command_runner.go:130] > # cpuset = 0
	I0520 13:10:06.217981  616253 command_runner.go:130] > # cpushares = "0-1"
	I0520 13:10:06.217990  616253 command_runner.go:130] > # Where:
	I0520 13:10:06.218000  616253 command_runner.go:130] > # The workload name is workload-type.
	I0520 13:10:06.218012  616253 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 13:10:06.218023  616253 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 13:10:06.218031  616253 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 13:10:06.218048  616253 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 13:10:06.218061  616253 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0520 13:10:06.218071  616253 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 13:10:06.218081  616253 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 13:10:06.218090  616253 command_runner.go:130] > # Default value is set to true
	I0520 13:10:06.218100  616253 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 13:10:06.218111  616253 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 13:10:06.218121  616253 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 13:10:06.218132  616253 command_runner.go:130] > # Default value is set to 'false'
	I0520 13:10:06.218140  616253 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 13:10:06.218153  616253 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 13:10:06.218163  616253 command_runner.go:130] > #
	I0520 13:10:06.218174  616253 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 13:10:06.218187  616253 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 13:10:06.218203  616253 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 13:10:06.218216  616253 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 13:10:06.218228  616253 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 13:10:06.218237  616253 command_runner.go:130] > [crio.image]
	I0520 13:10:06.218247  616253 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 13:10:06.218257  616253 command_runner.go:130] > # default_transport = "docker://"
	I0520 13:10:06.218268  616253 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 13:10:06.218280  616253 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:10:06.218289  616253 command_runner.go:130] > # global_auth_file = ""
	I0520 13:10:06.218299  616253 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 13:10:06.218311  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.218322  616253 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 13:10:06.218336  616253 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 13:10:06.218348  616253 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:10:06.218358  616253 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:10:06.218367  616253 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 13:10:06.218375  616253 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 13:10:06.218387  616253 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 13:10:06.218400  616253 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 13:10:06.218410  616253 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 13:10:06.218419  616253 command_runner.go:130] > # pause_command = "/pause"
	I0520 13:10:06.218430  616253 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 13:10:06.218440  616253 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 13:10:06.218452  616253 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 13:10:06.218464  616253 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 13:10:06.218476  616253 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 13:10:06.218486  616253 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 13:10:06.218496  616253 command_runner.go:130] > # pinned_images = [
	I0520 13:10:06.218501  616253 command_runner.go:130] > # ]
	I0520 13:10:06.218514  616253 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 13:10:06.218526  616253 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 13:10:06.218539  616253 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 13:10:06.218551  616253 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 13:10:06.218565  616253 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 13:10:06.218574  616253 command_runner.go:130] > # signature_policy = ""
	I0520 13:10:06.218584  616253 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 13:10:06.218596  616253 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 13:10:06.218612  616253 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 13:10:06.218619  616253 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 13:10:06.218629  616253 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 13:10:06.218635  616253 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
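
A minimal sketch of the per-namespace policy lookup described above, assuming the documented layout <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json with fallback to the global policy when the namespace is empty or the file is absent. resolvePolicyPath is a hypothetical helper, not CRI-O code.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // resolvePolicyPath prefers <policyDir>/<namespace>.json and falls back to
    // the global signature_policy (or system-wide default) otherwise.
    func resolvePolicyPath(policyDir, namespace, fallback string) string {
        if namespace != "" {
            p := filepath.Join(policyDir, namespace+".json")
            if _, err := os.Stat(p); err == nil {
                return p
            }
        }
        return fallback
    }

    func main() {
        fmt.Println(resolvePolicyPath("/etc/crio/policies", "kube-system", "/etc/containers/policy.json"))
    }
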
	I0520 13:10:06.218643  616253 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 13:10:06.218652  616253 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 13:10:06.218657  616253 command_runner.go:130] > # changing them here.
	I0520 13:10:06.218663  616253 command_runner.go:130] > # insecure_registries = [
	I0520 13:10:06.218669  616253 command_runner.go:130] > # ]
	I0520 13:10:06.218677  616253 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 13:10:06.218685  616253 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 13:10:06.218691  616253 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 13:10:06.218698  616253 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 13:10:06.218705  616253 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 13:10:06.218713  616253 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 13:10:06.218719  616253 command_runner.go:130] > # CNI plugins.
	I0520 13:10:06.218725  616253 command_runner.go:130] > [crio.network]
	I0520 13:10:06.218734  616253 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 13:10:06.218743  616253 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 13:10:06.218750  616253 command_runner.go:130] > # cni_default_network = ""
	I0520 13:10:06.218757  616253 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 13:10:06.218763  616253 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 13:10:06.218771  616253 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 13:10:06.218776  616253 command_runner.go:130] > # plugin_dirs = [
	I0520 13:10:06.218782  616253 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 13:10:06.218787  616253 command_runner.go:130] > # ]
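
A simplified sketch of the "first one found in network_dir" behaviour mentioned above, assuming the usual .conf/.conflist/.json CNI file extensions and alphabetical ordering. firstCNIConfig is illustrative only and does not reproduce CRI-O's actual CNI selection logic.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "sort"
    )

    // firstCNIConfig scans networkDir for CNI config files, sorts them by
    // name, and returns the first match.
    func firstCNIConfig(networkDir string) (string, error) {
        var configs []string
        for _, ext := range []string{"*.conf", "*.conflist", "*.json"} {
            m, err := filepath.Glob(filepath.Join(networkDir, ext))
            if err != nil {
                return "", err
            }
            configs = append(configs, m...)
        }
        if len(configs) == 0 {
            return "", os.ErrNotExist
        }
        sort.Strings(configs)
        return configs[0], nil
    }

    func main() {
        cfg, err := firstCNIConfig("/etc/cni/net.d")
        fmt.Println(cfg, err)
    }
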
	I0520 13:10:06.218795  616253 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 13:10:06.218799  616253 command_runner.go:130] > [crio.metrics]
	I0520 13:10:06.218806  616253 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 13:10:06.218811  616253 command_runner.go:130] > enable_metrics = true
	I0520 13:10:06.218818  616253 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 13:10:06.218830  616253 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 13:10:06.218846  616253 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0520 13:10:06.218860  616253 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 13:10:06.218870  616253 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 13:10:06.218879  616253 command_runner.go:130] > # metrics_collectors = [
	I0520 13:10:06.218885  616253 command_runner.go:130] > # 	"operations",
	I0520 13:10:06.218896  616253 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 13:10:06.218904  616253 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 13:10:06.218915  616253 command_runner.go:130] > # 	"operations_errors",
	I0520 13:10:06.218926  616253 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 13:10:06.218936  616253 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 13:10:06.218944  616253 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 13:10:06.218953  616253 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 13:10:06.218960  616253 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 13:10:06.218970  616253 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 13:10:06.218978  616253 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 13:10:06.218988  616253 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 13:10:06.218997  616253 command_runner.go:130] > # 	"containers_oom_total",
	I0520 13:10:06.219020  616253 command_runner.go:130] > # 	"containers_oom",
	I0520 13:10:06.219027  616253 command_runner.go:130] > # 	"processes_defunct",
	I0520 13:10:06.219035  616253 command_runner.go:130] > # 	"operations_total",
	I0520 13:10:06.219044  616253 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 13:10:06.219051  616253 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 13:10:06.219061  616253 command_runner.go:130] > # 	"operations_errors_total",
	I0520 13:10:06.219067  616253 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 13:10:06.219076  616253 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 13:10:06.219082  616253 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 13:10:06.219091  616253 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 13:10:06.219098  616253 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 13:10:06.219104  616253 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 13:10:06.219114  616253 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 13:10:06.219121  616253 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 13:10:06.219128  616253 command_runner.go:130] > # ]
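
The prefix equivalence described above ("operations" treated the same as "crio_operations" and "container_runtime_crio_operations") amounts to stripping the optional prefixes before comparing collector names. A small sketch, with normalizeCollector as a made-up helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizeCollector strips the optional prefixes so all three spellings
    // of a collector name refer to the same collector.
    func normalizeCollector(name string) string {
        name = strings.TrimPrefix(name, "container_runtime_")
        name = strings.TrimPrefix(name, "crio_")
        return name
    }

    func main() {
        for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
            fmt.Println(normalizeCollector(n)) // all print "operations"
        }
    }
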
	I0520 13:10:06.219139  616253 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 13:10:06.219148  616253 command_runner.go:130] > # metrics_port = 9090
	I0520 13:10:06.219155  616253 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 13:10:06.219164  616253 command_runner.go:130] > # metrics_socket = ""
	I0520 13:10:06.219174  616253 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 13:10:06.219187  616253 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 13:10:06.219203  616253 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 13:10:06.219216  616253 command_runner.go:130] > # certificate on any modification event.
	I0520 13:10:06.219226  616253 command_runner.go:130] > # metrics_cert = ""
	I0520 13:10:06.219233  616253 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 13:10:06.219244  616253 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 13:10:06.219249  616253 command_runner.go:130] > # metrics_key = ""
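
As a rough illustration of "generate a self-signed one" above, the following stdlib-only Go sketch creates a throwaway self-signed certificate and key. The subject name and the output file names (metrics.crt / metrics.key) are placeholders, and this is not CRI-O's actual certificate code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Self-signed: the template is used as both subject and issuer.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "crio-metrics"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("metrics.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o600)
        _ = os.WriteFile("metrics.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }
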
	I0520 13:10:06.219261  616253 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 13:10:06.219271  616253 command_runner.go:130] > [crio.tracing]
	I0520 13:10:06.219279  616253 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 13:10:06.219288  616253 command_runner.go:130] > # enable_tracing = false
	I0520 13:10:06.219296  616253 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 13:10:06.219305  616253 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 13:10:06.219318  616253 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 13:10:06.219328  616253 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
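
A one-function sketch of how a per-million sampling rate such as tracing_sampling_rate_per_million behaves: 0 samples nothing, 1000000 samples every span. shouldSample is illustrative only, not CRI-O's sampler.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // shouldSample returns true for roughly ratePerMillion out of every
    // 1,000,000 decisions.
    func shouldSample(ratePerMillion int) bool {
        return rand.Intn(1_000_000) < ratePerMillion
    }

    func main() {
        fmt.Println(shouldSample(0), shouldSample(1_000_000)) // false true
    }
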
	I0520 13:10:06.219335  616253 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 13:10:06.219344  616253 command_runner.go:130] > [crio.nri]
	I0520 13:10:06.219351  616253 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 13:10:06.219361  616253 command_runner.go:130] > # enable_nri = false
	I0520 13:10:06.219368  616253 command_runner.go:130] > # NRI socket to listen on.
	I0520 13:10:06.219378  616253 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 13:10:06.219389  616253 command_runner.go:130] > # NRI plugin directory to use.
	I0520 13:10:06.219397  616253 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 13:10:06.219407  616253 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 13:10:06.219416  616253 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 13:10:06.219426  616253 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 13:10:06.219435  616253 command_runner.go:130] > # nri_disable_connections = false
	I0520 13:10:06.219442  616253 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 13:10:06.219453  616253 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 13:10:06.219461  616253 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 13:10:06.219471  616253 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 13:10:06.219483  616253 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 13:10:06.219492  616253 command_runner.go:130] > [crio.stats]
	I0520 13:10:06.219505  616253 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 13:10:06.219516  616253 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 13:10:06.219529  616253 command_runner.go:130] > # stats_collection_period = 0
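
A small sketch of the period-versus-on-demand behaviour of stats_collection_period described above: a positive period drives a ticker, while 0 collects only when asked. runCollector and collectStats are made-up stand-ins, not CRI-O code.

    package main

    import (
        "fmt"
        "time"
    )

    // collectStats stands in for the pod/container stats collection.
    func collectStats() { fmt.Println("collected at", time.Now().Format(time.RFC3339)) }

    // runCollector collects periodically when periodSeconds > 0, otherwise
    // only when a request arrives on onDemand.
    func runCollector(periodSeconds int, onDemand <-chan struct{}) {
        if periodSeconds == 0 {
            for range onDemand {
                collectStats()
            }
            return
        }
        t := time.NewTicker(time.Duration(periodSeconds) * time.Second)
        defer t.Stop()
        for range t.C {
            collectStats()
        }
    }

    func main() {
        demand := make(chan struct{}, 1)
        demand <- struct{}{}
        close(demand)
        runCollector(0, demand) // on-demand mode: collects exactly once here
    }
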
	I0520 13:10:06.219560  616253 command_runner.go:130] ! time="2024-05-20 13:10:06.195664658Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 13:10:06.219573  616253 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 13:10:06.219696  616253 cni.go:84] Creating CNI manager for ""
	I0520 13:10:06.219707  616253 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:10:06.219726  616253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:10:06.219753  616253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-694790 NodeName:functional-694790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:10:06.219910  616253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-694790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:10:06.219975  616253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:10:06.229517  616253 command_runner.go:130] > kubeadm
	I0520 13:10:06.229538  616253 command_runner.go:130] > kubectl
	I0520 13:10:06.229542  616253 command_runner.go:130] > kubelet
	I0520 13:10:06.229767  616253 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:10:06.229851  616253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:10:06.239554  616253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 13:10:06.257771  616253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:10:06.276416  616253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 13:10:06.294406  616253 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0520 13:10:06.298384  616253 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
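
The grep above verifies that /etc/hosts already maps control-plane.minikube.internal to the node IP. A stdlib sketch of that check (and of the append minikube would otherwise perform) might look like the following; hasHostsEntry and appendHostsEntry are hypothetical helpers, and appending requires root.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // hasHostsEntry reports whether hostsPath already contains a line mapping
    // ip to host (simplified: only the first hostname on each line is checked).
    func hasHostsEntry(hostsPath, ip, host string) (bool, error) {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
                return true, nil
            }
        }
        return false, nil
    }

    // appendHostsEntry appends "ip<TAB>host" to the hosts file.
    func appendHostsEntry(hostsPath, ip, host string) error {
        f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
        return err
    }

    func main() {
        ok, err := hasHostsEntry("/etc/hosts", "192.168.39.165", "control-plane.minikube.internal")
        fmt.Println(ok, err)
        _ = appendHostsEntry // shown for completeness; needs root to run
    }
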
	I0520 13:10:06.298466  616253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:10:06.444516  616253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:10:06.459825  616253 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790 for IP: 192.168.39.165
	I0520 13:10:06.460002  616253 certs.go:194] generating shared ca certs ...
	I0520 13:10:06.460062  616253 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:10:06.460266  616253 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:10:06.460318  616253 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:10:06.460335  616253 certs.go:256] generating profile certs ...
	I0520 13:10:06.460469  616253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.key
	I0520 13:10:06.460554  616253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key.d272fba6
	I0520 13:10:06.460608  616253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key
	I0520 13:10:06.460623  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:10:06.460641  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:10:06.460661  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:10:06.460679  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:10:06.460698  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:10:06.460731  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:10:06.460750  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:10:06.460767  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:10:06.460838  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:10:06.460882  616253 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:10:06.460897  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:10:06.460932  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:10:06.460963  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:10:06.460996  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:10:06.461055  616253 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:10:06.461093  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.461113  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.461135  616253 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.461744  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:10:06.485749  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:10:06.508423  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:10:06.531309  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:10:06.554904  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:10:06.577419  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:10:06.599494  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:10:06.622370  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:10:06.645543  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:10:06.668978  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:10:06.691916  616253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:10:06.713971  616253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:10:06.729459  616253 ssh_runner.go:195] Run: openssl version
	I0520 13:10:06.735000  616253 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 13:10:06.735381  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:10:06.753128  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757508  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757733  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.757807  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:10:06.762942  616253 command_runner.go:130] > 3ec20f2e
	I0520 13:10:06.763053  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:10:06.771998  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:10:06.781768  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785813  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785892  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.785945  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:10:06.790992  616253 command_runner.go:130] > b5213941
	I0520 13:10:06.791066  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:10:06.799691  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:10:06.809997  616253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814233  616253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814266  616253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.814302  616253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:10:06.819637  616253 command_runner.go:130] > 51391683
	I0520 13:10:06.819717  616253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
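
The three blocks above hash each CA bundle with openssl and link /etc/ssl/certs/<hash>.0 to it, which is how OpenSSL-based clients locate trusted certificates by subject hash. A Go sketch of the same two steps (privileges and the test -L guard omitted); linkCertByHash is a made-up helper.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash asks openssl for the certificate's subject hash and
    // symlinks <certsDir>/<hash>.0 to the PEM file, like `ln -fs` above.
    func linkCertByHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace an existing link, mimicking -f
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
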
	I0520 13:10:06.828643  616253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:10:06.832994  616253 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:10:06.833025  616253 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 13:10:06.833031  616253 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0520 13:10:06.833038  616253 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:10:06.833047  616253 command_runner.go:130] > Access: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833056  616253 command_runner.go:130] > Modify: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833063  616253 command_runner.go:130] > Change: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833073  616253 command_runner.go:130] >  Birth: 2024-05-20 13:07:50.431290266 +0000
	I0520 13:10:06.833179  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:10:06.838619  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.838730  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:10:06.844033  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.844171  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:10:06.849741  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.849814  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:10:06.855149  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.855408  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:10:06.860980  616253 command_runner.go:130] > Certificate will not expire
	I0520 13:10:06.861051  616253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 13:10:06.866652  616253 command_runner.go:130] > Certificate will not expire
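
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively in Go by parsing the PEM and comparing NotAfter; expiresWithin is an illustrative helper, not minikube's code.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }
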
	I0520 13:10:06.866710  616253 kubeadm.go:391] StartCluster: {Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:10:06.866783  616253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:10:06.866833  616253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:10:06.902965  616253 command_runner.go:130] > 0c62efa8614b8a9332b7854cf566f0019f7bf9769580bec1fc8ca8128436ef71
	I0520 13:10:06.902989  616253 command_runner.go:130] > 994be8edcd6dc8ef46d99f9edf9002a4126475365cff14f88d3a04d621a1327d
	I0520 13:10:06.902995  616253 command_runner.go:130] > 2f0690832c93fdf31579092306b21563d063c9fe1bffdb5a8ec45484f5235b44
	I0520 13:10:06.903044  616253 command_runner.go:130] > 725502a8fab5f94c88c7a65658b3916b6b807344fa64f8ef380324d845067145
	I0520 13:10:06.903069  616253 command_runner.go:130] > 8bc22f563655c225e4d4887e10763bb0b1eca39ab8d0d7601d82b669e43e689f
	I0520 13:10:06.903081  616253 command_runner.go:130] > 8178b9248ec3d3dadab1b6998390cda60be94dd606fc9f3bd00b3ec46fdaba5d
	I0520 13:10:06.903093  616253 command_runner.go:130] > fea004b7d1708081f1e62b29a80ceb9f5be30b4a2bec0951e79fe26d77d3e428
	I0520 13:10:06.903107  616253 command_runner.go:130] > bd8dcb4e3314afac1a5e1eee4e96ff11f17bd70ba4d80d3dc5062377f01dbcf1
	I0520 13:10:06.903119  616253 command_runner.go:130] > 5353f272c983196bcbefbb85bb7a426173f7fa7ce23104785380f243e75fee32
	I0520 13:10:06.903131  616253 command_runner.go:130] > 88bc645ba8dc33e1953a1f03a31532c2fe5189427addb21d9aac04febf162b2c
	I0520 13:10:06.903180  616253 command_runner.go:130] > 6d1d6c466d0706c03ec1ca14158c2006d6050ab4e4d67c8894f2b88559394387
	I0520 13:10:06.903306  616253 command_runner.go:130] > f5a1bfc0038588fe036fb0a09d1f1319f63830dceb48128bcd013cf1daba9feb
	I0520 13:10:06.904853  616253 cri.go:89] found id: "0c62efa8614b8a9332b7854cf566f0019f7bf9769580bec1fc8ca8128436ef71"
	I0520 13:10:06.904870  616253 cri.go:89] found id: "994be8edcd6dc8ef46d99f9edf9002a4126475365cff14f88d3a04d621a1327d"
	I0520 13:10:06.904876  616253 cri.go:89] found id: "2f0690832c93fdf31579092306b21563d063c9fe1bffdb5a8ec45484f5235b44"
	I0520 13:10:06.904880  616253 cri.go:89] found id: "725502a8fab5f94c88c7a65658b3916b6b807344fa64f8ef380324d845067145"
	I0520 13:10:06.904884  616253 cri.go:89] found id: "8bc22f563655c225e4d4887e10763bb0b1eca39ab8d0d7601d82b669e43e689f"
	I0520 13:10:06.904889  616253 cri.go:89] found id: "8178b9248ec3d3dadab1b6998390cda60be94dd606fc9f3bd00b3ec46fdaba5d"
	I0520 13:10:06.904893  616253 cri.go:89] found id: "fea004b7d1708081f1e62b29a80ceb9f5be30b4a2bec0951e79fe26d77d3e428"
	I0520 13:10:06.904896  616253 cri.go:89] found id: "bd8dcb4e3314afac1a5e1eee4e96ff11f17bd70ba4d80d3dc5062377f01dbcf1"
	I0520 13:10:06.904900  616253 cri.go:89] found id: "5353f272c983196bcbefbb85bb7a426173f7fa7ce23104785380f243e75fee32"
	I0520 13:10:06.904911  616253 cri.go:89] found id: "88bc645ba8dc33e1953a1f03a31532c2fe5189427addb21d9aac04febf162b2c"
	I0520 13:10:06.904918  616253 cri.go:89] found id: "6d1d6c466d0706c03ec1ca14158c2006d6050ab4e4d67c8894f2b88559394387"
	I0520 13:10:06.904924  616253 cri.go:89] found id: "f5a1bfc0038588fe036fb0a09d1f1319f63830dceb48128bcd013cf1daba9feb"
	I0520 13:10:06.904929  616253 cri.go:89] found id: ""
	I0520 13:10:06.904982  616253 ssh_runner.go:195] Run: sudo runc list -f json
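
The container IDs listed above come from the crictl invocation shown at cri.go:54 / ssh_runner.go:195. A small Go sketch that runs the same query and collects the IDs, assuming crictl is on PATH and the caller has sufficient privileges; listKubeSystemContainers is a made-up wrapper.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers runs the same crictl query as the log above and
    // returns the container IDs it prints, one per line.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        fmt.Println(len(ids), "containers", err)
    }
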
	
	
	==> CRI-O <==
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.049714274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211570049690594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d270b09-554c-45a3-bdf2-d6ac9eda25fa name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.050403177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b3bec4a-1b3d-4db3-bcba-73ca42001658 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.050471836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b3bec4a-1b3d-4db3-bcba-73ca42001658 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.050588781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f03baae9dfb6bcb34f891044f70988b2c7cd97edf342b3a5f5fa731fed99cc0,PodSandboxId:2d56e0cd0d8c34c5024ad94de0989d4bf9c782e28327d5893a0098970b47db8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210892300213060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ea942634483091ed77ba272a0c5e7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b98ee8961fc43d2fcb4d3d43836e3a1bf79518a63c099ecb98bc91963bdfb64,PodSandboxId:35160f12dce7f18f6b8ea241173fbafeec4eb8f34ed31aace32593ba31bfde36,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210892273168786,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4456a398db073d1e11be716e521523,},Annotations:map[string]string{io.kubernetes.container.hash: 32885c91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23539c78453cc6623f7895d4d1b03146d49882c624bce3a9730e467afa024ac,PodSandboxId:77f1d31ce3c51d48cd760dd023e8b573c5b8014c5d3900464cdd4d995393b90d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210892251325346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0b515f65810d6ccc5255085d1940d13a15a0532100204167c58c9634e314c5,PodSandboxId:bfafe3618e526ed72a3097967d5e5552adf48b4cfea5191829a729f7c4c529ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210609457139115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b3bec4a-1b3d-4db3-bcba-73ca42001658 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.090921998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7dab6a1-6efd-491a-91e8-401b041bc909 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.091082524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7dab6a1-6efd-491a-91e8-401b041bc909 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.093035002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8205aa8-c888-443c-8984-5ecb1029318c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.093461351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211570093431397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8205aa8-c888-443c-8984-5ecb1029318c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.096468456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfbd490a-f40f-4194-83cc-e205e0acba9b name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.096537434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfbd490a-f40f-4194-83cc-e205e0acba9b name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.096646258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f03baae9dfb6bcb34f891044f70988b2c7cd97edf342b3a5f5fa731fed99cc0,PodSandboxId:2d56e0cd0d8c34c5024ad94de0989d4bf9c782e28327d5893a0098970b47db8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210892300213060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ea942634483091ed77ba272a0c5e7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b98ee8961fc43d2fcb4d3d43836e3a1bf79518a63c099ecb98bc91963bdfb64,PodSandboxId:35160f12dce7f18f6b8ea241173fbafeec4eb8f34ed31aace32593ba31bfde36,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210892273168786,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4456a398db073d1e11be716e521523,},Annotations:map[string]string{io.kubernetes.container.hash: 32885c91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23539c78453cc6623f7895d4d1b03146d49882c624bce3a9730e467afa024ac,PodSandboxId:77f1d31ce3c51d48cd760dd023e8b573c5b8014c5d3900464cdd4d995393b90d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210892251325346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0b515f65810d6ccc5255085d1940d13a15a0532100204167c58c9634e314c5,PodSandboxId:bfafe3618e526ed72a3097967d5e5552adf48b4cfea5191829a729f7c4c529ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210609457139115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfbd490a-f40f-4194-83cc-e205e0acba9b name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.129048041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc255a2f-34cd-4892-a0fd-b821e1fe32c1 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.129156562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc255a2f-34cd-4892-a0fd-b821e1fe32c1 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.130272099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0f8a1d8-7a18-4118-a8bb-beeef0ba3b17 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.130877228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211570130850501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0f8a1d8-7a18-4118-a8bb-beeef0ba3b17 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.131633446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2eb3ab4c-c9a7-446d-b354-c9d5e9d59fbb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.131710049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2eb3ab4c-c9a7-446d-b354-c9d5e9d59fbb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.131821970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f03baae9dfb6bcb34f891044f70988b2c7cd97edf342b3a5f5fa731fed99cc0,PodSandboxId:2d56e0cd0d8c34c5024ad94de0989d4bf9c782e28327d5893a0098970b47db8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210892300213060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ea942634483091ed77ba272a0c5e7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b98ee8961fc43d2fcb4d3d43836e3a1bf79518a63c099ecb98bc91963bdfb64,PodSandboxId:35160f12dce7f18f6b8ea241173fbafeec4eb8f34ed31aace32593ba31bfde36,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210892273168786,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4456a398db073d1e11be716e521523,},Annotations:map[string]string{io.kubernetes.container.hash: 32885c91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23539c78453cc6623f7895d4d1b03146d49882c624bce3a9730e467afa024ac,PodSandboxId:77f1d31ce3c51d48cd760dd023e8b573c5b8014c5d3900464cdd4d995393b90d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210892251325346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0b515f65810d6ccc5255085d1940d13a15a0532100204167c58c9634e314c5,PodSandboxId:bfafe3618e526ed72a3097967d5e5552adf48b4cfea5191829a729f7c4c529ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210609457139115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2eb3ab4c-c9a7-446d-b354-c9d5e9d59fbb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.164958199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77a475ac-c6dd-458e-b68d-17018bf7b195 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.165136251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77a475ac-c6dd-458e-b68d-17018bf7b195 name=/runtime.v1.RuntimeService/Version
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.166493443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a717cc8e-26ff-4c05-9e98-2798e86ad441 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.166856729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211570166835328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a717cc8e-26ff-4c05-9e98-2798e86ad441 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.167477883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc386d23-c890-4970-b81a-235fde1a6a65 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.167530384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc386d23-c890-4970-b81a-235fde1a6a65 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:26:10 functional-694790 crio[3272]: time="2024-05-20 13:26:10.167756141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f03baae9dfb6bcb34f891044f70988b2c7cd97edf342b3a5f5fa731fed99cc0,PodSandboxId:2d56e0cd0d8c34c5024ad94de0989d4bf9c782e28327d5893a0098970b47db8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210892300213060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ea942634483091ed77ba272a0c5e7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b98ee8961fc43d2fcb4d3d43836e3a1bf79518a63c099ecb98bc91963bdfb64,PodSandboxId:35160f12dce7f18f6b8ea241173fbafeec4eb8f34ed31aace32593ba31bfde36,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210892273168786,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4456a398db073d1e11be716e521523,},Annotations:map[string]string{io.kubernetes.container.hash: 32885c91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23539c78453cc6623f7895d4d1b03146d49882c624bce3a9730e467afa024ac,PodSandboxId:77f1d31ce3c51d48cd760dd023e8b573c5b8014c5d3900464cdd4d995393b90d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210892251325346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0b515f65810d6ccc5255085d1940d13a15a0532100204167c58c9634e314c5,PodSandboxId:bfafe3618e526ed72a3097967d5e5552adf48b4cfea5191829a729f7c4c529ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210609457139115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-694790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a4a0e59cbe8ddecfd8d6ee3a6e12e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a5fb84e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc386d23-c890-4970-b81a-235fde1a6a65 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	3f03baae9dfb6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   11 minutes ago      Running             kube-scheduler      3                   2d56e0cd0d8c3       kube-scheduler-functional-694790
	0b98ee8961fc4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   11 minutes ago      Running             etcd                3                   35160f12dce7f       etcd-functional-694790
	b23539c78453c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   11 minutes ago      Running             kube-apiserver      3                   77f1d31ce3c51       kube-apiserver-functional-694790
	ed0b515f65810       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   16 minutes ago      Exited              kube-apiserver      2                   bfafe3618e526       kube-apiserver-functional-694790
	
	
	==> describe nodes <==
	Name:               functional-694790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-694790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=functional-694790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_14_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:14:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-694790
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:26:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:25:10 +0000   Mon, 20 May 2024 13:14:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:25:10 +0000   Mon, 20 May 2024 13:14:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:25:10 +0000   Mon, 20 May 2024 13:14:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:25:10 +0000   Mon, 20 May 2024 13:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    functional-694790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7efc2930dce4d5d89e3567a5e97b011
	  System UUID:                d7efc293-0dce-4d5d-89e3-567a5e97b011
	  Boot ID:                    bfc935df-98ba-4048-9ad6-8f995a781ec6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-functional-694790                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-694790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-694790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-694790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 11m   kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet  Node functional-694790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet  Node functional-694790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet  Node functional-694790 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.252511] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +3.969494] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.350935] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.065529] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.988284] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.079988] kauditd_printk_skb: 69 callbacks suppressed
	[May20 13:08] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.419414] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[ +11.456753] kauditd_printk_skb: 103 callbacks suppressed
	[  +9.454052] systemd-fstab-generator[2918]: Ignoring "noauto" option for root device
	[  +0.234363] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[  +0.305938] systemd-fstab-generator[3078]: Ignoring "noauto" option for root device
	[  +0.182485] systemd-fstab-generator[3094]: Ignoring "noauto" option for root device
	[  +0.304972] systemd-fstab-generator[3123]: Ignoring "noauto" option for root device
	[May20 13:10] systemd-fstab-generator[3384]: Ignoring "noauto" option for root device
	[  +0.075927] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.174083] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +4.548637] kauditd_printk_skb: 79 callbacks suppressed
	[May20 13:11] kauditd_printk_skb: 35 callbacks suppressed
	[May20 13:14] systemd-fstab-generator[5842]: Ignoring "noauto" option for root device
	[  +6.055689] systemd-fstab-generator[6125]: Ignoring "noauto" option for root device
	[  +0.066361] kauditd_printk_skb: 63 callbacks suppressed
	[May20 13:15] systemd-fstab-generator[6751]: Ignoring "noauto" option for root device
	[  +0.084027] kauditd_printk_skb: 12 callbacks suppressed
	[May20 13:16] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [0b98ee8961fc43d2fcb4d3d43836e3a1bf79518a63c099ecb98bc91963bdfb64] <==
	{"level":"info","ts":"2024-05-20T13:14:52.639237Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-05-20T13:14:52.639694Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-05-20T13:14:52.641412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(18429775660708452854)"}
	{"level":"info","ts":"2024-05-20T13:14:52.64155Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","added-peer-id":"ffc3b7517aaad9f6","added-peer-peer-urls":["https://192.168.39.165:2380"]}
	{"level":"info","ts":"2024-05-20T13:14:53.566631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T13:14:53.566684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T13:14:53.56673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 1"}
	{"level":"info","ts":"2024-05-20T13:14:53.566745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:14:53.566751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-05-20T13:14:53.566769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T13:14:53.566777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-05-20T13:14:53.568605Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:functional-694790 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:14:53.568772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:14:53.568843Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:14:53.568787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:14:53.570107Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:14:53.570186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:14:53.570217Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:14:53.568809Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:14:53.570231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:14:53.571176Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-05-20T13:14:53.572026Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T13:24:53.602298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":379}
	{"level":"info","ts":"2024-05-20T13:24:53.611272Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":379,"took":"8.676675ms","hash":3593072449,"current-db-size-bytes":765952,"current-db-size":"766 kB","current-db-size-in-use-bytes":765952,"current-db-size-in-use":"766 kB"}
	{"level":"info","ts":"2024-05-20T13:24:53.611339Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3593072449,"revision":379,"compact-revision":-1}
	
	
	==> kernel <==
	 13:26:10 up 18 min,  0 users,  load average: 0.11, 0.14, 0.13
	Linux functional-694790 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b23539c78453cc6623f7895d4d1b03146d49882c624bce3a9730e467afa024ac] <==
	I0520 13:14:54.913529       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:14:54.913552       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:14:54.914810       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:14:54.915209       1 controller.go:615] quota admission added evaluator for: namespaces
	I0520 13:14:54.915376       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:14:54.915421       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:14:54.915487       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:14:54.915505       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:14:54.915510       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:14:54.915515       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:14:54.915639       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:14:55.082957       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:14:55.722053       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 13:14:55.726366       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 13:14:55.726430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:14:56.394094       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:14:56.449120       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:14:56.555492       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 13:14:56.563096       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165]
	I0520 13:14:56.564144       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 13:14:56.568449       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:14:57.491335       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:14:57.513708       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:14:57.530402       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 13:14:57.540668       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [ed0b515f65810d6ccc5255085d1940d13a15a0532100204167c58c9634e314c5] <==
	W0520 13:14:47.839185       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:47.859086       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:47.876075       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:47.966754       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:47.991397       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.002657       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.002742       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.014894       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.024135       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.034027       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.053312       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.125613       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.158346       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.203460       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.211623       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.224150       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.246365       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.269814       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.278751       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.335387       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.344326       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.555402       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.568602       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.655062       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:14:48.698518       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-scheduler [3f03baae9dfb6bcb34f891044f70988b2c7cd97edf342b3a5f5fa731fed99cc0] <==
	W0520 13:14:54.844103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:14:54.844128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:14:54.843797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 13:14:54.844144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 13:14:54.844202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:14:54.844230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:14:55.709846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 13:14:55.709891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 13:14:55.788038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:14:55.788082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:14:55.814440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:14:55.814482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:14:55.830078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:14:55.830117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:14:55.925746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:14:55.925856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:14:55.947689       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:14:55.947734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:14:55.975169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:14:55.975272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:14:56.097166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:14:56.097380       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:14:56.443866       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:14:56.443934       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 13:14:58.134773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:24:57 functional-694790 kubelet[6132]: E0520 13:24:57.382657    6132 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:24:57 functional-694790 kubelet[6132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:24:57 functional-694790 kubelet[6132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:24:57 functional-694790 kubelet[6132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:24:57 functional-694790 kubelet[6132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:25:04 functional-694790 kubelet[6132]: E0520 13:25:04.376294    6132 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="9cf7acc416082721cd1cbe8231b379f9cf6ac46d3ba05d4071cb3e38a96df150"
	May 20 13:25:04 functional-694790 kubelet[6132]: E0520 13:25:04.376491    6132 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-cred
entials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-functional-694790_kube-system(5f8
650130b9e5d5da93cc7f2b708a785): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use
	May 20 13:25:04 functional-694790 kubelet[6132]: E0520 13:25:04.376522    6132 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\\\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-functional-694790" podUID="5f8650130b9e5d5da93cc7f2b708a785"
	May 20 13:25:18 functional-694790 kubelet[6132]: E0520 13:25:18.374628    6132 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="9cf7acc416082721cd1cbe8231b379f9cf6ac46d3ba05d4071cb3e38a96df150"
	May 20 13:25:18 functional-694790 kubelet[6132]: E0520 13:25:18.375172    6132 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-cred
entials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-functional-694790_kube-system(5f8
650130b9e5d5da93cc7f2b708a785): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use
	May 20 13:25:18 functional-694790 kubelet[6132]: E0520 13:25:18.375280    6132 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\\\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-functional-694790" podUID="5f8650130b9e5d5da93cc7f2b708a785"
	May 20 13:25:33 functional-694790 kubelet[6132]: E0520 13:25:33.374845    6132 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="9cf7acc416082721cd1cbe8231b379f9cf6ac46d3ba05d4071cb3e38a96df150"
	May 20 13:25:33 functional-694790 kubelet[6132]: E0520 13:25:33.375310    6132 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-cred
entials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-functional-694790_kube-system(5f8
650130b9e5d5da93cc7f2b708a785): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use
	May 20 13:25:33 functional-694790 kubelet[6132]: E0520 13:25:33.375414    6132 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\\\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-functional-694790" podUID="5f8650130b9e5d5da93cc7f2b708a785"
	May 20 13:25:47 functional-694790 kubelet[6132]: E0520 13:25:47.376519    6132 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="9cf7acc416082721cd1cbe8231b379f9cf6ac46d3ba05d4071cb3e38a96df150"
	May 20 13:25:47 functional-694790 kubelet[6132]: E0520 13:25:47.376732    6132 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-cred
entials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-functional-694790_kube-system(5f8
650130b9e5d5da93cc7f2b708a785): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use
	May 20 13:25:47 functional-694790 kubelet[6132]: E0520 13:25:47.376766    6132 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\\\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-functional-694790" podUID="5f8650130b9e5d5da93cc7f2b708a785"
	May 20 13:25:57 functional-694790 kubelet[6132]: E0520 13:25:57.381626    6132 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:25:57 functional-694790 kubelet[6132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:25:57 functional-694790 kubelet[6132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:25:57 functional-694790 kubelet[6132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:25:57 functional-694790 kubelet[6132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:26:02 functional-694790 kubelet[6132]: E0520 13:26:02.373915    6132 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="9cf7acc416082721cd1cbe8231b379f9cf6ac46d3ba05d4071cb3e38a96df150"
	May 20 13:26:02 functional-694790 kubelet[6132]: E0520 13:26:02.374403    6132 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-cred
entials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-functional-694790_kube-system(5f8
650130b9e5d5da93cc7f2b708a785): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use
	May 20 13:26:02 functional-694790 kubelet[6132]: E0520 13:26:02.374508    6132 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-functional-694790_kube-system_5f8650130b9e5d5da93cc7f2b708a785_1\\\" is already in use by 96b092783f3ab4bb8132d478ab34ed4182c7d08cd8b970627969437363475a68. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-functional-694790" podUID="5f8650130b9e5d5da93cc7f2b708a785"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:26:09.739322  620341 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18929-602525/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-694790 -n functional-694790
helpers_test.go:261: (dbg) Run:  kubectl --context functional-694790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-controller-manager-functional-694790 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestFunctional/serial/SoftStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-694790 describe pod kube-controller-manager-functional-694790 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-694790 describe pod kube-controller-manager-functional-694790 storage-provisioner: exit status 1 (59.134525ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-functional-694790" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-694790 describe pod kube-controller-manager-functional-694790 storage-provisioner: exit status 1
--- FAIL: TestFunctional/serial/SoftStart (1064.47s)
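The kubelet errors earlier in this post-mortem show CRI-O refusing to start kube-controller-manager because the name k8s_kube-controller-manager_..._1 is still held by an exited container. A hedged sketch of a manual clean-up an operator might try over SSH on the node (an illustration only, not the test suite's recovery logic; the ID is copied from the error message above, and crictl may need sudo on the node):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// List all containers, including exited ones, to spot the stale entry
    	// that still owns the disputed name.
    	if err := run("crictl", "ps", "-a"); err != nil {
    		fmt.Fprintln(os.Stderr, "crictl ps failed:", err)
    		os.Exit(1)
    	}
    	// Remove the stale container by ID so the kubelet can recreate the
    	// container under the same name on its next sync.
    	staleID := "96b092783f3a" // ID prefix taken from the error message above
    	if err := run("crictl", "rm", staleID); err != nil {
    		fmt.Fprintln(os.Stderr, "crictl rm failed:", err)
    		os.Exit(1)
    	}
    }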

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 node stop m02 -v=7 --alsologtostderr
E0520 13:32:40.722151  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:33:01.807123  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:33:21.682764  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.495418965s)

                                                
                                                
-- stdout --
	* Stopping node "ha-170194-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:32:28.531683  628153 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:32:28.531982  628153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:32:28.531992  628153 out.go:304] Setting ErrFile to fd 2...
	I0520 13:32:28.531997  628153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:32:28.532270  628153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:32:28.532673  628153 mustload.go:65] Loading cluster: ha-170194
	I0520 13:32:28.533199  628153 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:32:28.533223  628153 stop.go:39] StopHost: ha-170194-m02
	I0520 13:32:28.533722  628153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:32:28.533784  628153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:32:28.552723  628153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0520 13:32:28.553222  628153 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:32:28.553959  628153 main.go:141] libmachine: Using API Version  1
	I0520 13:32:28.553980  628153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:32:28.554541  628153 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:32:28.558057  628153 out.go:177] * Stopping node "ha-170194-m02"  ...
	I0520 13:32:28.560572  628153 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 13:32:28.560622  628153 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:32:28.560983  628153 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 13:32:28.561045  628153 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:32:28.564441  628153 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:32:28.565047  628153 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:32:28.565092  628153 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:32:28.565495  628153 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:32:28.565798  628153 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:32:28.565962  628153 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:32:28.566146  628153 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:32:28.648955  628153 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 13:32:28.703270  628153 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 13:32:28.758874  628153 main.go:141] libmachine: Stopping "ha-170194-m02"...
	I0520 13:32:28.758932  628153 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:32:28.760818  628153 main.go:141] libmachine: (ha-170194-m02) Calling .Stop
	I0520 13:32:28.765500  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 0/120
	I0520 13:32:29.768106  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 1/120
	I0520 13:32:30.769584  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 2/120
	I0520 13:32:31.771920  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 3/120
	I0520 13:32:32.773212  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 4/120
	I0520 13:32:33.775356  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 5/120
	I0520 13:32:34.776783  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 6/120
	I0520 13:32:35.778348  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 7/120
	I0520 13:32:36.779711  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 8/120
	I0520 13:32:37.782223  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 9/120
	I0520 13:32:38.784465  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 10/120
	I0520 13:32:39.786393  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 11/120
	I0520 13:32:40.787862  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 12/120
	I0520 13:32:41.789636  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 13/120
	I0520 13:32:42.792297  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 14/120
	I0520 13:32:43.794556  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 15/120
	I0520 13:32:44.796090  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 16/120
	I0520 13:32:45.797713  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 17/120
	I0520 13:32:46.799027  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 18/120
	I0520 13:32:47.800660  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 19/120
	I0520 13:32:48.802921  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 20/120
	I0520 13:32:49.804379  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 21/120
	I0520 13:32:50.805806  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 22/120
	I0520 13:32:51.808034  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 23/120
	I0520 13:32:52.809754  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 24/120
	I0520 13:32:53.811809  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 25/120
	I0520 13:32:54.813274  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 26/120
	I0520 13:32:55.814547  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 27/120
	I0520 13:32:56.816090  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 28/120
	I0520 13:32:57.817468  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 29/120
	I0520 13:32:58.819162  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 30/120
	I0520 13:32:59.820751  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 31/120
	I0520 13:33:00.822161  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 32/120
	I0520 13:33:01.824099  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 33/120
	I0520 13:33:02.825592  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 34/120
	I0520 13:33:03.826993  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 35/120
	I0520 13:33:04.828502  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 36/120
	I0520 13:33:05.830548  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 37/120
	I0520 13:33:06.832559  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 38/120
	I0520 13:33:07.834438  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 39/120
	I0520 13:33:08.836951  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 40/120
	I0520 13:33:09.838566  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 41/120
	I0520 13:33:10.839989  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 42/120
	I0520 13:33:11.841524  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 43/120
	I0520 13:33:12.842912  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 44/120
	I0520 13:33:13.845177  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 45/120
	I0520 13:33:14.846806  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 46/120
	I0520 13:33:15.848235  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 47/120
	I0520 13:33:16.849675  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 48/120
	I0520 13:33:17.851340  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 49/120
	I0520 13:33:18.853871  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 50/120
	I0520 13:33:19.856063  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 51/120
	I0520 13:33:20.857369  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 52/120
	I0520 13:33:21.859063  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 53/120
	I0520 13:33:22.860236  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 54/120
	I0520 13:33:23.862254  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 55/120
	I0520 13:33:24.863668  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 56/120
	I0520 13:33:25.865018  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 57/120
	I0520 13:33:26.866431  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 58/120
	I0520 13:33:27.868813  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 59/120
	I0520 13:33:28.871018  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 60/120
	I0520 13:33:29.872421  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 61/120
	I0520 13:33:30.873986  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 62/120
	I0520 13:33:31.875453  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 63/120
	I0520 13:33:32.877051  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 64/120
	I0520 13:33:33.879021  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 65/120
	I0520 13:33:34.881331  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 66/120
	I0520 13:33:35.882599  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 67/120
	I0520 13:33:36.884153  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 68/120
	I0520 13:33:37.885492  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 69/120
	I0520 13:33:38.887340  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 70/120
	I0520 13:33:39.889437  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 71/120
	I0520 13:33:40.891585  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 72/120
	I0520 13:33:41.892879  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 73/120
	I0520 13:33:42.894197  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 74/120
	I0520 13:33:43.896493  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 75/120
	I0520 13:33:44.898520  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 76/120
	I0520 13:33:45.901103  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 77/120
	I0520 13:33:46.903155  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 78/120
	I0520 13:33:47.905693  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 79/120
	I0520 13:33:48.908020  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 80/120
	I0520 13:33:49.909540  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 81/120
	I0520 13:33:50.911765  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 82/120
	I0520 13:33:51.913512  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 83/120
	I0520 13:33:52.914841  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 84/120
	I0520 13:33:53.916921  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 85/120
	I0520 13:33:54.918657  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 86/120
	I0520 13:33:55.920362  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 87/120
	I0520 13:33:56.922527  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 88/120
	I0520 13:33:57.923983  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 89/120
	I0520 13:33:58.926017  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 90/120
	I0520 13:33:59.927819  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 91/120
	I0520 13:34:00.929044  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 92/120
	I0520 13:34:01.930644  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 93/120
	I0520 13:34:02.932039  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 94/120
	I0520 13:34:03.934011  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 95/120
	I0520 13:34:04.935622  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 96/120
	I0520 13:34:05.936860  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 97/120
	I0520 13:34:06.938719  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 98/120
	I0520 13:34:07.940143  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 99/120
	I0520 13:34:08.942609  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 100/120
	I0520 13:34:09.944199  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 101/120
	I0520 13:34:10.945772  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 102/120
	I0520 13:34:11.947853  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 103/120
	I0520 13:34:12.950463  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 104/120
	I0520 13:34:13.952761  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 105/120
	I0520 13:34:14.954194  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 106/120
	I0520 13:34:15.955950  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 107/120
	I0520 13:34:16.957517  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 108/120
	I0520 13:34:17.959510  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 109/120
	I0520 13:34:18.961303  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 110/120
	I0520 13:34:19.962725  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 111/120
	I0520 13:34:20.964093  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 112/120
	I0520 13:34:21.965578  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 113/120
	I0520 13:34:22.967675  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 114/120
	I0520 13:34:23.969627  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 115/120
	I0520 13:34:24.971732  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 116/120
	I0520 13:34:25.973156  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 117/120
	I0520 13:34:26.974937  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 118/120
	I0520 13:34:27.976445  628153 main.go:141] libmachine: (ha-170194-m02) Waiting for machine to stop 119/120
	I0520 13:34:28.977073  628153 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 13:34:28.977310  628153 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-170194 node stop m02 -v=7 --alsologtostderr": exit status 30
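The stop attempt above requests a graceful shutdown and then polls once per second for 120 attempts before giving up with the VM still "Running" (exit status 30). Purely as an illustrative sketch of that pattern for a libvirt/KVM guest, and explicitly not minikube's own stop path, one could request an ACPI shutdown and fall back to a forced power-off when the budget runs out (domain name taken from the log; virsh must point at the same libvirt instance):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // domState asks libvirt for the current state of the guest ("running",
    // "shut off", ...).
    func domState(domain string) (string, error) {
    	out, err := exec.Command("virsh", "domstate", domain).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const domain = "ha-170194-m02" // domain name from the log above

    	// Ask the guest to shut down cleanly (ACPI signal).
    	_ = exec.Command("virsh", "shutdown", domain).Run()

    	for i := 0; i < 120; i++ { // same 120 x 1s budget seen in the log
    		if state, err := domState(domain); err == nil && state == "shut off" {
    			fmt.Println("stopped cleanly")
    			return
    		}
    		time.Sleep(time.Second)
    	}

    	// Graceful stop timed out; force a power-off instead of returning an error.
    	fmt.Println("timeout, forcing power-off")
    	_ = exec.Command("virsh", "destroy", domain).Run()
    }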
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
E0520 13:34:43.604670  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (19.115858551s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:34:29.020913  628616 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:34:29.021180  628616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:29.021190  628616 out.go:304] Setting ErrFile to fd 2...
	I0520 13:34:29.021194  628616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:29.021430  628616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:34:29.021616  628616 out.go:298] Setting JSON to false
	I0520 13:34:29.021645  628616 mustload.go:65] Loading cluster: ha-170194
	I0520 13:34:29.021681  628616 notify.go:220] Checking for updates...
	I0520 13:34:29.022012  628616 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:34:29.022033  628616 status.go:255] checking status of ha-170194 ...
	I0520 13:34:29.022612  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.022697  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.040519  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I0520 13:34:29.040996  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.041713  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.041745  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.042122  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.042349  628616 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:34:29.044080  628616 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:34:29.044105  628616 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:29.044536  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.044585  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.060721  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0520 13:34:29.061237  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.061735  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.061763  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.062088  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.062293  628616 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:34:29.065026  628616 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:29.065573  628616 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:29.065592  628616 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:29.065746  628616 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:29.066032  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.066092  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.081135  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0520 13:34:29.082023  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.083276  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.083328  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.084024  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.084252  628616 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:34:29.084451  628616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:29.084475  628616 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:34:29.087571  628616 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:29.088020  628616 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:29.088047  628616 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:29.088172  628616 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:34:29.088335  628616 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:34:29.088509  628616 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:34:29.088639  628616 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:34:29.175010  628616 ssh_runner.go:195] Run: systemctl --version
	I0520 13:34:29.182153  628616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:29.198810  628616 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:34:29.198846  628616 api_server.go:166] Checking apiserver status ...
	I0520 13:34:29.198878  628616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:34:29.219061  628616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:34:29.229372  628616 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:34:29.229424  628616 ssh_runner.go:195] Run: ls
	I0520 13:34:29.235093  628616 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:34:29.239486  628616 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:34:29.239511  628616 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:34:29.239520  628616 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:34:29.239538  628616 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:34:29.239815  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.239848  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.255469  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0520 13:34:29.255895  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.256361  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.256383  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.256747  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.256937  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:34:29.258858  628616 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:34:29.258882  628616 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:29.259304  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.259351  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.275607  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0520 13:34:29.276075  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.276616  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.276642  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.276926  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.277122  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:34:29.280022  628616 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:29.280396  628616 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:29.280435  628616 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:29.280576  628616 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:29.281010  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:29.281064  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:29.297518  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38877
	I0520 13:34:29.298075  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:29.298694  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:29.298717  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:29.299141  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:29.299392  628616 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:34:29.299607  628616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:29.299625  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:34:29.302782  628616 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:29.303296  628616 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:29.303327  628616 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:29.303476  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:34:29.303684  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:34:29.303856  628616 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:34:29.304008  628616 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:34:47.725460  628616 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:34:47.725601  628616 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:34:47.725625  628616 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:34:47.725635  628616 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:34:47.725661  628616 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:34:47.725673  628616 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:34:47.726009  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.726048  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.741228  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0520 13:34:47.741750  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.742339  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.742367  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.742813  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.743085  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:34:47.744555  628616 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:34:47.744573  628616 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:34:47.744856  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.744896  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.761319  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0520 13:34:47.761760  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.762203  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.762221  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.762523  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.762705  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:34:47.765741  628616 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:47.766218  628616 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:34:47.766241  628616 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:47.766366  628616 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:34:47.766662  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.766703  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.783449  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0520 13:34:47.783986  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.784570  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.784595  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.784960  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.785205  628616 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:34:47.785420  628616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:47.785453  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:34:47.788817  628616 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:47.789317  628616 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:34:47.789350  628616 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:47.789494  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:34:47.789677  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:34:47.789884  628616 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:34:47.790035  628616 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:34:47.870953  628616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:47.891608  628616 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:34:47.891640  628616 api_server.go:166] Checking apiserver status ...
	I0520 13:34:47.891677  628616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:34:47.908093  628616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:34:47.917846  628616 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:34:47.917906  628616 ssh_runner.go:195] Run: ls
	I0520 13:34:47.922414  628616 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:34:47.926903  628616 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:34:47.926928  628616 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:34:47.926937  628616 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:34:47.926955  628616 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:34:47.927256  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.927289  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.944527  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0520 13:34:47.944996  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.945489  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.945512  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.945843  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.946100  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:34:47.947980  628616 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:34:47.947999  628616 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:34:47.948281  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.948317  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.963674  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0520 13:34:47.964257  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.964779  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.964802  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.965140  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.965369  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:34:47.968602  628616 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:47.969078  628616 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:34:47.969136  628616 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:47.969281  628616 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:34:47.969747  628616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:47.969794  628616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:47.987082  628616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0520 13:34:47.987574  628616 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:47.988049  628616 main.go:141] libmachine: Using API Version  1
	I0520 13:34:47.988072  628616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:47.988371  628616 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:47.988522  628616 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:34:47.988726  628616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:47.988753  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:34:47.991566  628616 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:47.992058  628616 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:34:47.992093  628616 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:47.992261  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:34:47.992435  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:34:47.992602  628616 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:34:47.992800  628616 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:34:48.074590  628616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:48.091894  628616 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr" : exit status 3
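Both apiserver probes in the status output above log "unable to find freezer cgroup" before running a plain ls: the grep for ":freezer:" in /proc/<pid>/cgroup only matches on a cgroup v1 host, while cgroup v2 exposes just the unified "0::/<path>" entry, so the command exits non-zero. A small sketch (an illustration, not the harness's check) that tells the two layouts apart:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/proc/self/cgroup")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hasFreezer := false
    	unifiedOnly := true
    	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
    		// Each line is "hierarchy-ID:controller-list:path".
    		parts := strings.SplitN(line, ":", 3)
    		if len(parts) != 3 {
    			continue
    		}
    		if strings.Contains(parts[1], "freezer") {
    			hasFreezer = true // v1 freezer controller present
    		}
    		if parts[1] != "" {
    			unifiedOnly = false // some v1 controller line exists
    		}
    	}
    	switch {
    	case hasFreezer:
    		fmt.Println("cgroup v1 with a freezer controller")
    	case unifiedOnly:
    		fmt.Println("cgroup v2 (unified hierarchy only)")
    	default:
    		fmt.Println("cgroup v1 without a freezer entry")
    	}
    }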
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-170194 -n ha-170194
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-170194 logs -n 25: (1.529653735s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m03_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m04 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp testdata/cp-test.txt                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m04_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03:/home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m03 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-170194 node stop m02 -v=7                                                     | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:27:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:27:54.787808  624195 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:27:54.788072  624195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:54.788083  624195 out.go:304] Setting ErrFile to fd 2...
	I0520 13:27:54.788090  624195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:54.788302  624195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:27:54.788863  624195 out.go:298] Setting JSON to false
	I0520 13:27:54.789842  624195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11415,"bootTime":1716200260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:27:54.789902  624195 start.go:139] virtualization: kvm guest
	I0520 13:27:54.792915  624195 out.go:177] * [ha-170194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:27:54.795227  624195 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:27:54.795193  624195 notify.go:220] Checking for updates...
	I0520 13:27:54.797364  624195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:27:54.799684  624195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:27:54.801844  624195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:54.803952  624195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:27:54.805891  624195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:27:54.807989  624195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:27:54.843729  624195 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 13:27:54.845864  624195 start.go:297] selected driver: kvm2
	I0520 13:27:54.845891  624195 start.go:901] validating driver "kvm2" against <nil>
	I0520 13:27:54.845909  624195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:27:54.846658  624195 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:27:54.846750  624195 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:27:54.862551  624195 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:27:54.862617  624195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 13:27:54.862816  624195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:27:54.862872  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:27:54.862883  624195 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 13:27:54.862888  624195 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 13:27:54.862953  624195 start.go:340] cluster config:
	{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:27:54.863053  624195 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:27:54.865627  624195 out.go:177] * Starting "ha-170194" primary control-plane node in "ha-170194" cluster
	I0520 13:27:54.867679  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:27:54.867715  624195 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:27:54.867723  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:27:54.867784  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:27:54.867794  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:27:54.868073  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:27:54.868092  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json: {Name:mk4d4f049f9025d6d1dcc6479cee744453ad1838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:27:54.868224  624195 start.go:360] acquireMachinesLock for ha-170194: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:27:54.868254  624195 start.go:364] duration metric: took 16.059µs to acquireMachinesLock for "ha-170194"
	I0520 13:27:54.868268  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:27:54.868333  624195 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 13:27:54.870806  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:27:54.870938  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:54.870974  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:54.885147  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0520 13:27:54.885650  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:54.886178  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:27:54.886201  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:54.886581  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:54.886848  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:27:54.887034  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:27:54.887235  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:27:54.887272  624195 client.go:168] LocalClient.Create starting
	I0520 13:27:54.887319  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:27:54.887353  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:27:54.887379  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:27:54.887474  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:27:54.887508  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:27:54.887522  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:27:54.887544  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:27:54.887555  624195 main.go:141] libmachine: (ha-170194) Calling .PreCreateCheck
	I0520 13:27:54.887917  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:27:54.888401  624195 main.go:141] libmachine: Creating machine...
	I0520 13:27:54.888423  624195 main.go:141] libmachine: (ha-170194) Calling .Create
	I0520 13:27:54.888571  624195 main.go:141] libmachine: (ha-170194) Creating KVM machine...
	I0520 13:27:54.889884  624195 main.go:141] libmachine: (ha-170194) DBG | found existing default KVM network
	I0520 13:27:54.890580  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:54.890457  624219 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0520 13:27:54.890611  624195 main.go:141] libmachine: (ha-170194) DBG | created network xml: 
	I0520 13:27:54.890624  624195 main.go:141] libmachine: (ha-170194) DBG | <network>
	I0520 13:27:54.890637  624195 main.go:141] libmachine: (ha-170194) DBG |   <name>mk-ha-170194</name>
	I0520 13:27:54.890646  624195 main.go:141] libmachine: (ha-170194) DBG |   <dns enable='no'/>
	I0520 13:27:54.890663  624195 main.go:141] libmachine: (ha-170194) DBG |   
	I0520 13:27:54.890675  624195 main.go:141] libmachine: (ha-170194) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 13:27:54.890690  624195 main.go:141] libmachine: (ha-170194) DBG |     <dhcp>
	I0520 13:27:54.890711  624195 main.go:141] libmachine: (ha-170194) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 13:27:54.890728  624195 main.go:141] libmachine: (ha-170194) DBG |     </dhcp>
	I0520 13:27:54.890738  624195 main.go:141] libmachine: (ha-170194) DBG |   </ip>
	I0520 13:27:54.890748  624195 main.go:141] libmachine: (ha-170194) DBG |   
	I0520 13:27:54.890761  624195 main.go:141] libmachine: (ha-170194) DBG | </network>
	I0520 13:27:54.890777  624195 main.go:141] libmachine: (ha-170194) DBG | 
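
The XML dumped above is the private libvirt network the kvm2 driver generates for this cluster: no forwarding, DNS disabled, and a DHCP range of 192.168.39.2-192.168.39.253. The driver defines it through the libvirt API; as a rough sketch only, the same definition could be created and inspected by hand with virsh, assuming virsh is installed and the XML has been saved to a local file:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// defineAndInspect defines a libvirt network from an XML file, starts it,
	// and dumps the live definition back, roughly what the log above reports
	// the driver doing through the libvirt API.
	func defineAndInspect(xmlPath, name string) error {
		for _, args := range [][]string{
			{"net-define", xmlPath},
			{"net-start", name},
			{"net-dumpxml", name},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			out, err := cmd.CombinedOutput()
			if err != nil {
				return fmt.Errorf("virsh %v: %w\n%s", args, err, out)
			}
			fmt.Printf("%s", out)
		}
		return nil
	}

	func main() {
		if err := defineAndInspect("mk-ha-170194.xml", "mk-ha-170194"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
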
	I0520 13:27:54.896065  624195 main.go:141] libmachine: (ha-170194) DBG | trying to create private KVM network mk-ha-170194 192.168.39.0/24...
	I0520 13:27:54.967027  624195 main.go:141] libmachine: (ha-170194) DBG | private KVM network mk-ha-170194 192.168.39.0/24 created
	I0520 13:27:54.967086  624195 main.go:141] libmachine: (ha-170194) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 ...
	I0520 13:27:54.967102  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:54.966962  624219 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:54.967468  624195 main.go:141] libmachine: (ha-170194) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:27:54.967612  624195 main.go:141] libmachine: (ha-170194) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:27:55.252359  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.252215  624219 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa...
	I0520 13:27:55.368707  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.368606  624219 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/ha-170194.rawdisk...
	I0520 13:27:55.368742  624195 main.go:141] libmachine: (ha-170194) DBG | Writing magic tar header
	I0520 13:27:55.368754  624195 main.go:141] libmachine: (ha-170194) DBG | Writing SSH key tar header
	I0520 13:27:55.368766  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.368730  624219 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 ...
	I0520 13:27:55.368900  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194
	I0520 13:27:55.368933  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 (perms=drwx------)
	I0520 13:27:55.368949  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:27:55.368963  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:55.368976  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:27:55.368992  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:27:55.369000  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:27:55.369009  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home
	I0520 13:27:55.369015  624195 main.go:141] libmachine: (ha-170194) DBG | Skipping /home - not owner
	I0520 13:27:55.369027  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:27:55.369043  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:27:55.369057  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:27:55.369071  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:27:55.369084  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
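
The permission pass above walks from the new machine directory up toward /home, making each level traversable and skipping directories the current user does not own. A minimal sketch of that walk, assuming a plain add-the-execute-bits chmod is enough (the real pass also checks ownership before touching each directory and applies a different mode per level):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// ensureTraversable adds execute (search) bits to every directory from dir
	// up to, but not including, stop, so the machine files can be reached.
	func ensureTraversable(dir, stop string) error {
		for p := dir; p != stop && p != string(filepath.Separator); p = filepath.Dir(p) {
			info, err := os.Stat(p)
			if err != nil {
				return err
			}
			if err := os.Chmod(p, info.Mode().Perm()|0o111); err != nil {
				return err
			}
			fmt.Printf("set executable bit on %s (perms=%v)\n", p, info.Mode().Perm()|0o111)
		}
		return nil
	}

	func main() {
		dir := "/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194"
		if err := ensureTraversable(dir, "/home"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
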
	I0520 13:27:55.369092  624195 main.go:141] libmachine: (ha-170194) Creating domain...
	I0520 13:27:55.370115  624195 main.go:141] libmachine: (ha-170194) define libvirt domain using xml: 
	I0520 13:27:55.370139  624195 main.go:141] libmachine: (ha-170194) <domain type='kvm'>
	I0520 13:27:55.370149  624195 main.go:141] libmachine: (ha-170194)   <name>ha-170194</name>
	I0520 13:27:55.370158  624195 main.go:141] libmachine: (ha-170194)   <memory unit='MiB'>2200</memory>
	I0520 13:27:55.370167  624195 main.go:141] libmachine: (ha-170194)   <vcpu>2</vcpu>
	I0520 13:27:55.370174  624195 main.go:141] libmachine: (ha-170194)   <features>
	I0520 13:27:55.370183  624195 main.go:141] libmachine: (ha-170194)     <acpi/>
	I0520 13:27:55.370190  624195 main.go:141] libmachine: (ha-170194)     <apic/>
	I0520 13:27:55.370200  624195 main.go:141] libmachine: (ha-170194)     <pae/>
	I0520 13:27:55.370208  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370218  624195 main.go:141] libmachine: (ha-170194)   </features>
	I0520 13:27:55.370224  624195 main.go:141] libmachine: (ha-170194)   <cpu mode='host-passthrough'>
	I0520 13:27:55.370229  624195 main.go:141] libmachine: (ha-170194)   
	I0520 13:27:55.370237  624195 main.go:141] libmachine: (ha-170194)   </cpu>
	I0520 13:27:55.370244  624195 main.go:141] libmachine: (ha-170194)   <os>
	I0520 13:27:55.370252  624195 main.go:141] libmachine: (ha-170194)     <type>hvm</type>
	I0520 13:27:55.370308  624195 main.go:141] libmachine: (ha-170194)     <boot dev='cdrom'/>
	I0520 13:27:55.370344  624195 main.go:141] libmachine: (ha-170194)     <boot dev='hd'/>
	I0520 13:27:55.370384  624195 main.go:141] libmachine: (ha-170194)     <bootmenu enable='no'/>
	I0520 13:27:55.370412  624195 main.go:141] libmachine: (ha-170194)   </os>
	I0520 13:27:55.370422  624195 main.go:141] libmachine: (ha-170194)   <devices>
	I0520 13:27:55.370433  624195 main.go:141] libmachine: (ha-170194)     <disk type='file' device='cdrom'>
	I0520 13:27:55.370451  624195 main.go:141] libmachine: (ha-170194)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/boot2docker.iso'/>
	I0520 13:27:55.370464  624195 main.go:141] libmachine: (ha-170194)       <target dev='hdc' bus='scsi'/>
	I0520 13:27:55.370475  624195 main.go:141] libmachine: (ha-170194)       <readonly/>
	I0520 13:27:55.370489  624195 main.go:141] libmachine: (ha-170194)     </disk>
	I0520 13:27:55.370504  624195 main.go:141] libmachine: (ha-170194)     <disk type='file' device='disk'>
	I0520 13:27:55.370516  624195 main.go:141] libmachine: (ha-170194)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:27:55.370532  624195 main.go:141] libmachine: (ha-170194)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/ha-170194.rawdisk'/>
	I0520 13:27:55.370542  624195 main.go:141] libmachine: (ha-170194)       <target dev='hda' bus='virtio'/>
	I0520 13:27:55.370553  624195 main.go:141] libmachine: (ha-170194)     </disk>
	I0520 13:27:55.370567  624195 main.go:141] libmachine: (ha-170194)     <interface type='network'>
	I0520 13:27:55.370580  624195 main.go:141] libmachine: (ha-170194)       <source network='mk-ha-170194'/>
	I0520 13:27:55.370590  624195 main.go:141] libmachine: (ha-170194)       <model type='virtio'/>
	I0520 13:27:55.370602  624195 main.go:141] libmachine: (ha-170194)     </interface>
	I0520 13:27:55.370612  624195 main.go:141] libmachine: (ha-170194)     <interface type='network'>
	I0520 13:27:55.370623  624195 main.go:141] libmachine: (ha-170194)       <source network='default'/>
	I0520 13:27:55.370636  624195 main.go:141] libmachine: (ha-170194)       <model type='virtio'/>
	I0520 13:27:55.370647  624195 main.go:141] libmachine: (ha-170194)     </interface>
	I0520 13:27:55.370657  624195 main.go:141] libmachine: (ha-170194)     <serial type='pty'>
	I0520 13:27:55.370669  624195 main.go:141] libmachine: (ha-170194)       <target port='0'/>
	I0520 13:27:55.370678  624195 main.go:141] libmachine: (ha-170194)     </serial>
	I0520 13:27:55.370690  624195 main.go:141] libmachine: (ha-170194)     <console type='pty'>
	I0520 13:27:55.370701  624195 main.go:141] libmachine: (ha-170194)       <target type='serial' port='0'/>
	I0520 13:27:55.370711  624195 main.go:141] libmachine: (ha-170194)     </console>
	I0520 13:27:55.370721  624195 main.go:141] libmachine: (ha-170194)     <rng model='virtio'>
	I0520 13:27:55.370732  624195 main.go:141] libmachine: (ha-170194)       <backend model='random'>/dev/random</backend>
	I0520 13:27:55.370741  624195 main.go:141] libmachine: (ha-170194)     </rng>
	I0520 13:27:55.370749  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370759  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370767  624195 main.go:141] libmachine: (ha-170194)   </devices>
	I0520 13:27:55.370774  624195 main.go:141] libmachine: (ha-170194) </domain>
	I0520 13:27:55.370779  624195 main.go:141] libmachine: (ha-170194) 
	I0520 13:27:55.375705  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:d6:7b:51 in network default
	I0520 13:27:55.376247  624195 main.go:141] libmachine: (ha-170194) Ensuring networks are active...
	I0520 13:27:55.376271  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:55.376855  624195 main.go:141] libmachine: (ha-170194) Ensuring network default is active
	I0520 13:27:55.377222  624195 main.go:141] libmachine: (ha-170194) Ensuring network mk-ha-170194 is active
	I0520 13:27:55.377700  624195 main.go:141] libmachine: (ha-170194) Getting domain xml...
	I0520 13:27:55.378335  624195 main.go:141] libmachine: (ha-170194) Creating domain...
	I0520 13:27:56.557336  624195 main.go:141] libmachine: (ha-170194) Waiting to get IP...
	I0520 13:27:56.558101  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:56.558467  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:56.558559  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:56.558473  624219 retry.go:31] will retry after 230.582871ms: waiting for machine to come up
	I0520 13:27:56.790941  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:56.791484  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:56.791514  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:56.791443  624219 retry.go:31] will retry after 355.829641ms: waiting for machine to come up
	I0520 13:27:57.149070  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:57.149476  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:57.149502  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:57.149420  624219 retry.go:31] will retry after 344.241691ms: waiting for machine to come up
	I0520 13:27:57.494945  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:57.495413  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:57.495449  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:57.495342  624219 retry.go:31] will retry after 542.878171ms: waiting for machine to come up
	I0520 13:27:58.040037  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:58.040469  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:58.040498  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:58.040418  624219 retry.go:31] will retry after 500.259105ms: waiting for machine to come up
	I0520 13:27:58.542079  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:58.542505  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:58.542538  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:58.542436  624219 retry.go:31] will retry after 931.085496ms: waiting for machine to come up
	I0520 13:27:59.475499  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:59.475935  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:59.475975  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:59.475876  624219 retry.go:31] will retry after 721.553184ms: waiting for machine to come up
	I0520 13:28:00.199611  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:00.200101  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:00.200127  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:00.200042  624219 retry.go:31] will retry after 1.117618537s: waiting for machine to come up
	I0520 13:28:01.319380  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:01.319842  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:01.319873  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:01.319774  624219 retry.go:31] will retry after 1.394871155s: waiting for machine to come up
	I0520 13:28:02.717949  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:02.718384  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:02.718411  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:02.718352  624219 retry.go:31] will retry after 1.47499546s: waiting for machine to come up
	I0520 13:28:04.195297  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:04.195762  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:04.195792  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:04.195710  624219 retry.go:31] will retry after 1.787841557s: waiting for machine to come up
	I0520 13:28:05.985640  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:05.986161  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:05.986192  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:05.986100  624219 retry.go:31] will retry after 2.914900147s: waiting for machine to come up
	I0520 13:28:08.904215  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:08.904590  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:08.904609  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:08.904554  624219 retry.go:31] will retry after 3.774056973s: waiting for machine to come up
	I0520 13:28:12.682006  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:12.682480  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:12.682506  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:12.682442  624219 retry.go:31] will retry after 3.776735044s: waiting for machine to come up
	I0520 13:28:16.461298  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.461814  624195 main.go:141] libmachine: (ha-170194) Found IP for machine: 192.168.39.92
	I0520 13:28:16.461838  624195 main.go:141] libmachine: (ha-170194) Reserving static IP address...
	I0520 13:28:16.461851  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has current primary IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.462231  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find host DHCP lease matching {name: "ha-170194", mac: "52:54:00:4b:8c:ad", ip: "192.168.39.92"} in network mk-ha-170194
	I0520 13:28:16.538038  624195 main.go:141] libmachine: (ha-170194) DBG | Getting to WaitForSSH function...
	I0520 13:28:16.538071  624195 main.go:141] libmachine: (ha-170194) Reserved static IP address: 192.168.39.92
	I0520 13:28:16.538086  624195 main.go:141] libmachine: (ha-170194) Waiting for SSH to be available...
	I0520 13:28:16.540602  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.541069  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.541291  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.541330  624195 main.go:141] libmachine: (ha-170194) DBG | Using SSH client type: external
	I0520 13:28:16.541352  624195 main.go:141] libmachine: (ha-170194) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa (-rw-------)
	I0520 13:28:16.541378  624195 main.go:141] libmachine: (ha-170194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:28:16.541391  624195 main.go:141] libmachine: (ha-170194) DBG | About to run SSH command:
	I0520 13:28:16.541400  624195 main.go:141] libmachine: (ha-170194) DBG | exit 0
	I0520 13:28:16.665187  624195 main.go:141] libmachine: (ha-170194) DBG | SSH cmd err, output: <nil>: 
	I0520 13:28:16.665497  624195 main.go:141] libmachine: (ha-170194) KVM machine creation complete!
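
Creation above is dominated by two polling loops: waiting for the domain to pick up a DHCP lease in mk-ha-170194 (the repeated "will retry after ..." lines) and then waiting for sshd in the guest to answer. A minimal sketch of that retry-with-growing-delay pattern, with a hypothetical leaseIP probe standing in for the driver's libvirt lease lookup:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// leaseIP is a hypothetical stand-in for the driver's DHCP-lease lookup;
	// it fails until libvirt has handed the domain an address.
	func leaseIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls leaseIP with a jittered, growing delay, mirroring the
	// "will retry after ..." lines in the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := leaseIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	// waitForSSH dials the guest's port 22 until a TCP connection succeeds;
	// roughly the readiness signal behind "Waiting for SSH to be available"
	// (the real check runs "exit 0" over SSH rather than a bare TCP dial).
	func waitForSSH(ip string, deadline time.Time) error {
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 10*time.Second)
			if err == nil {
				return conn.Close()
			}
			time.Sleep(time.Second)
		}
		return errors.New("timed out waiting for SSH")
	}

	func main() {
		ip, err := waitForIP(5 * time.Second) // short timeout so the sketch terminates
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("found IP:", ip)
		_ = waitForSSH(ip, time.Now().Add(time.Minute))
	}
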
	I0520 13:28:16.665853  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:28:16.666420  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:16.666630  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:16.666784  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:28:16.666796  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:16.668190  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:28:16.668223  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:28:16.668256  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:28:16.668268  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.670743  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.671161  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.671199  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.671275  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.671492  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.671653  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.671790  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.671964  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.672292  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.672311  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:28:16.776402  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:28:16.776427  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:28:16.776437  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.779402  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.779733  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.779757  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.779919  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.780113  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.780297  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.780415  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.780543  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.780724  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.780739  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:28:16.877834  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:28:16.877917  624195 main.go:141] libmachine: found compatible host: buildroot
	I0520 13:28:16.877928  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:28:16.877942  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:16.878207  624195 buildroot.go:166] provisioning hostname "ha-170194"
	I0520 13:28:16.878241  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:16.878464  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.881126  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.881567  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.881600  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.881708  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.881988  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.882137  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.882325  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.882495  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.882655  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.882667  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194 && echo "ha-170194" | sudo tee /etc/hostname
	I0520 13:28:16.994455  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:28:16.994496  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.997222  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.997580  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.997603  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.997774  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.997979  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.998167  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.998322  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.998500  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.998684  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.998701  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:28:17.105422  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:28:17.105475  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:28:17.105546  624195 buildroot.go:174] setting up certificates
	I0520 13:28:17.105562  624195 provision.go:84] configureAuth start
	I0520 13:28:17.105583  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:17.105931  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.108932  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.109437  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.109468  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.109666  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.111911  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.112297  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.112327  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.112453  624195 provision.go:143] copyHostCerts
	I0520 13:28:17.112483  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:28:17.112519  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:28:17.112527  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:28:17.112590  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:28:17.112665  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:28:17.112682  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:28:17.112689  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:28:17.112710  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:28:17.112754  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:28:17.112771  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:28:17.112779  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:28:17.112799  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:28:17.112844  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194 san=[127.0.0.1 192.168.39.92 ha-170194 localhost minikube]
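
configureAuth refreshes the host-side CA material and then issues a server certificate whose SANs cover 127.0.0.1, the guest IP 192.168.39.92, the node name, localhost and minikube. The sketch below shows how such a SAN-bearing server certificate can be issued from a CA with Go's crypto/x509; it is not minikube's own code, and only the organization names, the SAN list and the 26280h lifetime are taken from the log and config above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newCert builds a certificate template with the given SANs; IP-looking
	// entries go into IPAddresses, the rest into DNSNames.
	func newCert(org string, sans []string, isCA bool) *x509.Certificate {
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(time.Now().UnixNano()),
			Subject:               pkix.Name{Organization: []string{org}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			BasicConstraintsValid: true,
			IsCA:                  isCA,
		}
		if isCA {
			tmpl.KeyUsage |= x509.KeyUsageCertSign
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		return tmpl
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := newCert("minikubeCA", nil, true)
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := newCert("jenkins.ha-170194",
			[]string{"127.0.0.1", "192.168.39.92", "ha-170194", "localhost", "minikube"}, false)
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
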
	I0520 13:28:17.183043  624195 provision.go:177] copyRemoteCerts
	I0520 13:28:17.183101  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:28:17.183127  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.185798  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.186268  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.186301  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.186430  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.186625  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.186765  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.186891  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.263716  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:28:17.263792  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:28:17.286709  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:28:17.286771  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 13:28:17.310154  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:28:17.310216  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:28:17.333534  624195 provision.go:87] duration metric: took 227.950346ms to configureAuth
	I0520 13:28:17.333565  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:28:17.333791  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:17.333904  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.336564  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.336892  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.336917  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.337113  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.337336  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.337505  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.337629  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.337762  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:17.337920  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:17.337933  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:28:17.582807  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
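
The %!s(MISSING) fragments in the command above (and in the date +%!s(MISSING).%!N(MISSING) probe further down) are not part of what actually ran on the guest: the real command contains a literal %s for the guest's printf and date, and when that string reaches a Go fmt-style logger as the format argument there is no matching operand, so fmt prints the missing-argument marker. A tiny reproduction:

	package main

	import "fmt"

	func main() {
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

		// Logging the command as a format string re-interprets its literal %s
		// as a verb with no operand, which fmt renders as %!s(MISSING).
		fmt.Printf("About to run SSH command:\n" + cmd + "\n")
	}
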
	
	I0520 13:28:17.582837  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:28:17.582845  624195 main.go:141] libmachine: (ha-170194) Calling .GetURL
	I0520 13:28:17.584038  624195 main.go:141] libmachine: (ha-170194) DBG | Using libvirt version 6000000
	I0520 13:28:17.586091  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.586396  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.586423  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.586565  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:28:17.586579  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:28:17.586586  624195 client.go:171] duration metric: took 22.699301504s to LocalClient.Create
	I0520 13:28:17.586611  624195 start.go:167] duration metric: took 22.699379662s to libmachine.API.Create "ha-170194"
	I0520 13:28:17.586621  624195 start.go:293] postStartSetup for "ha-170194" (driver="kvm2")
	I0520 13:28:17.586642  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:28:17.586660  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.586894  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:28:17.586924  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.589115  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.589437  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.589477  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.589573  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.589745  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.589886  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.590044  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.667163  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:28:17.671217  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:28:17.671240  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:28:17.671299  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:28:17.671368  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:28:17.671378  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:28:17.671466  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:28:17.680210  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:28:17.701507  624195 start.go:296] duration metric: took 114.864585ms for postStartSetup
	I0520 13:28:17.701571  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:28:17.702151  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.704863  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.705211  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.705239  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.705507  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:17.705727  624195 start.go:128] duration metric: took 22.837382587s to createHost
	I0520 13:28:17.705757  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.708076  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.708414  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.708442  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.708581  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.708782  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.708925  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.709049  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.709191  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:17.709391  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:17.709409  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:28:17.805513  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211697.764627381
	
	I0520 13:28:17.805540  624195 fix.go:216] guest clock: 1716211697.764627381
	I0520 13:28:17.805550  624195 fix.go:229] Guest: 2024-05-20 13:28:17.764627381 +0000 UTC Remote: 2024-05-20 13:28:17.705742423 +0000 UTC m=+22.952607324 (delta=58.884958ms)
	I0520 13:28:17.805576  624195 fix.go:200] guest clock delta is within tolerance: 58.884958ms
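	The guest-clock check above reads the VM's clock over SSH (the date +%s.%N command whose format verbs the logger mangles into %!s(MISSING)) and accepts the ~59ms drift as within tolerance. A minimal illustrative sketch of the comparison, with both readings taken locally here for simplicity:
	    # Illustrative only: in the real flow the first timestamp comes from the VM over SSH.
	    guest=$(date +%s.%N)    # stand-in for the guest reading
	    host=$(date +%s.%N)     # host reading taken immediately afterwards
	    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.6fs\n", h - g }'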
	I0520 13:28:17.805587  624195 start.go:83] releasing machines lock for "ha-170194", held for 22.937322256s
	I0520 13:28:17.805614  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.805884  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.808403  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.808724  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.808754  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.808867  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809445  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809654  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809756  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:28:17.809792  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.809916  624195 ssh_runner.go:195] Run: cat /version.json
	I0520 13:28:17.809941  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.812301  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812371  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812658  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.812688  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812712  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.812726  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812799  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.812933  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.813020  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.813052  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.813172  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.813265  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.813346  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.813430  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:28:17.885671  624195 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:28:17.885744  624195 ssh_runner.go:195] Run: systemctl --version
	I0520 13:28:17.920845  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:28:18.083087  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:28:18.089011  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:28:18.089074  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:28:18.104478  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:28:18.104502  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:28:18.104569  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:28:18.119192  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:28:18.131993  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:28:18.132040  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:28:18.144764  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:28:18.157011  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:28:18.262539  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:28:18.390638  624195 docker.go:233] disabling docker service ...
	I0520 13:28:18.390720  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:28:18.403852  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:28:18.416113  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:28:18.549600  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:28:18.661232  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
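	The block above stops, disables and masks the cri-docker and docker units so that cri-o is the only runtime left answering on the CRI socket. A condensed sketch of the same sequence, assuming the systemd unit names seen in the log:
	    # Condensed form of the runtime-disable sequence logged above.
	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    sudo systemctl is-active --quiet docker || echo "docker is inactive"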
	I0520 13:28:18.674749  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:28:18.692146  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:28:18.692204  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.702249  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:28:18.702328  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.712386  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.722412  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.732343  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:28:18.742653  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.752679  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.768314  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
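	The commands above first point crictl at the cri-o socket via /etc/crictl.yaml and then rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed. Assuming the stock config shipped in the ISO, the edited keys should end up roughly as follows (a sketch; surrounding settings omitted):
	    # Expected end state of the files edited above (values taken from the log).
	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/crio/crio.sock
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",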
	I0520 13:28:18.777984  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:28:18.786436  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:28:18.786490  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:28:18.798583  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:28:18.807592  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:28:18.916620  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
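	The sysctl failure above (exit status 255) only means br_netfilter was not loaded yet, which minikube treats as non-fatal: it loads the module, enables IPv4 forwarding and restarts cri-o. The same steps, condensed:
	    # Kernel and networking prep from the log, condensed.
	    sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
	    sudo sysctl net.bridge.bridge-nf-call-iptables    # now resolves
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio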
	I0520 13:28:19.050058  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:28:19.050157  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:28:19.054483  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:28:19.054545  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:28:19.057926  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:28:19.099892  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:28:19.099978  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:28:19.125482  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:28:19.159649  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:28:19.161634  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:19.164355  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:19.164819  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:19.164848  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:19.165120  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:28:19.169051  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:28:19.181358  624195 kubeadm.go:877] updating cluster {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:28:19.181503  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:28:19.181554  624195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:28:19.211681  624195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 13:28:19.211751  624195 ssh_runner.go:195] Run: which lz4
	I0520 13:28:19.215344  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 13:28:19.215446  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 13:28:19.219251  624195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 13:28:19.219283  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 13:28:20.426993  624195 crio.go:462] duration metric: took 1.211579486s to copy over tarball
	I0520 13:28:20.427099  624195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 13:28:22.481630  624195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.054498561s)
	I0520 13:28:22.481659  624195 crio.go:469] duration metric: took 2.054633756s to extract the tarball
	I0520 13:28:22.481674  624195 ssh_runner.go:146] rm: /preloaded.tar.lz4
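	The failed stat above shows no preload tarball on the node, so minikube copies its cached preloaded-images tarball over SSH and unpacks it under /var; the crictl check on the following lines then finds the images locally. A rough manual equivalent (tarball name and tar flags from the log; the SSH user, key location and temporary path are assumptions):
	    # Manual equivalent of the preload copy-and-extract step logged above.
	    KEY=~/.minikube/machines/ha-170194/id_rsa    # assumed key location
	    TARBALL=preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	    scp -i "$KEY" "$TARBALL" docker@192.168.39.92:/tmp/preloaded.tar.lz4
	    ssh -i "$KEY" docker@192.168.39.92 \
	        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'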
	I0520 13:28:22.517651  624195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:28:22.560937  624195 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:28:22.560962  624195 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:28:22.560970  624195 kubeadm.go:928] updating node { 192.168.39.92 8443 v1.30.1 crio true true} ...
	I0520 13:28:22.561099  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:28:22.561189  624195 ssh_runner.go:195] Run: crio config
	I0520 13:28:22.613106  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:28:22.613128  624195 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 13:28:22.613145  624195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:28:22.613167  624195 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170194 NodeName:ha-170194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:28:22.613321  624195 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-170194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
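	The generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document) is what later gets copied to /var/tmp/minikube/kubeadm.yaml. One way to sanity-check such a file without touching node state, using the standard kubeadm dry-run flag:
	    # Dry-run the generated config on the control-plane VM (paths from the log).
	    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml --dry-run | head -n 20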
	
	I0520 13:28:22.613346  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:28:22.613388  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:28:22.628339  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:28:22.628449  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
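	The manifest above is the kube-vip static pod that minikube places under /etc/kubernetes/manifests; it announces the HA virtual IP 192.168.39.254 via ARP on eth0 and load-balances API traffic on port 8443. Once the kubelet is running, a quick check (the static pod name carries the node suffix, so a grep is simplest):
	    # Verify the kube-vip static pod and the VIP binding (values from the log).
	    kubectl -n kube-system get pods | grep kube-vip
	    ip addr show eth0 | grep 192.168.39.254    # present on the current VIP leader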
	I0520 13:28:22.628504  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:28:22.637629  624195 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:28:22.637716  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 13:28:22.646391  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0520 13:28:22.661041  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:28:22.675870  624195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 13:28:22.690568  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
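	The four scp lines above materialize the kubelet drop-in, the kubelet unit, the kubeadm config and the kube-vip manifest on the node. To confirm they landed and that systemd sees the override (paths from the log):
	    # Spot-check the files written above.
	    ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	          /lib/systemd/system/kubelet.service \
	          /var/tmp/minikube/kubeadm.yaml.new \
	          /etc/kubernetes/manifests/kube-vip.yaml
	    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in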
	I0520 13:28:22.705009  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:28:22.708356  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:28:22.719020  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:28:22.844563  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:28:22.860778  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.92
	I0520 13:28:22.860798  624195 certs.go:194] generating shared ca certs ...
	I0520 13:28:22.860815  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.860993  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:28:22.861032  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:28:22.861041  624195 certs.go:256] generating profile certs ...
	I0520 13:28:22.861099  624195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:28:22.861117  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt with IP's: []
	I0520 13:28:22.962878  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt ...
	I0520 13:28:22.962909  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt: {Name:mk48839fa6f1275bc62052afea07d44900deb930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.963083  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key ...
	I0520 13:28:22.963094  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key: {Name:mk204d14d925f8a71a8af7296551fc6ce490a267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.963169  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede
	I0520 13:28:22.963185  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.254]
	I0520 13:28:23.110370  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede ...
	I0520 13:28:23.110405  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede: {Name:mkd54c1e251ab37cbe185c1a0846b1344783525e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.110573  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede ...
	I0520 13:28:23.110587  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede: {Name:mk0c4673a459951ad3c1fb8b6a2bac8448ff4296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.110657  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:28:23.110727  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
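	The apiserver certificate assembled above is signed for the in-cluster service IP, loopback, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.92, 192.168.39.254). After it is copied onto the node, the SANs can be confirmed with openssl (destination path taken from the scp lines further down):
	    # Inspect the SANs on the generated apiserver certificate.
	    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	        | grep -A1 'Subject Alternative Name'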
	I0520 13:28:23.110777  624195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:28:23.110791  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt with IP's: []
	I0520 13:28:23.167318  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt ...
	I0520 13:28:23.167348  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt: {Name:mkefa0155ad99bbe313405324e43f6da286534a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.167497  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key ...
	I0520 13:28:23.167515  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key: {Name:mkcbec10ece7167813a11fb62a95789f2f93bd0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.167581  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:28:23.167597  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:28:23.167607  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:28:23.167621  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:28:23.167631  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:28:23.167641  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:28:23.167652  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:28:23.167662  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:28:23.167713  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:28:23.167745  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:28:23.167754  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:28:23.167773  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:28:23.167848  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:28:23.167881  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:28:23.167927  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:28:23.167954  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.167967  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.167979  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.168541  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:28:23.192031  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:28:23.213025  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:28:23.233925  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:28:23.254734  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:28:23.275509  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:28:23.296244  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:28:23.317011  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:28:23.338012  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:28:23.358915  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:28:23.379411  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:28:23.399572  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:28:23.414145  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:28:23.419405  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:28:23.428999  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.433116  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.433173  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.438564  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:28:23.448795  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:28:23.458868  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.462994  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.463056  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.468359  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:28:23.478069  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:28:23.487827  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.491839  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.491887  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.496898  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
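	The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is how the system trust directory indexes CA certificates. The hash for any of the copied PEMs can be reproduced directly; for example, for the minikube CA:
	    # Reproduce the hash-based link name used above.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"                                  # b5213941 for this CA, per the log
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"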
	I0520 13:28:23.506612  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:28:23.510142  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:28:23.510203  624195 kubeadm.go:391] StartCluster: {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:28:23.510285  624195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:28:23.510321  624195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:28:23.549729  624195 cri.go:89] found id: ""
	I0520 13:28:23.549807  624195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 13:28:23.559071  624195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 13:28:23.567994  624195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:28:23.576778  624195 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:28:23.576803  624195 kubeadm.go:156] found existing configuration files:
	
	I0520 13:28:23.576844  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:28:23.585105  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:28:23.585153  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:28:23.594059  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:28:23.602846  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:28:23.602916  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:28:23.612117  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:28:23.622726  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:28:23.622796  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:28:23.631890  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:28:23.641364  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:28:23.641420  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 13:28:23.649728  624195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 13:28:23.753410  624195 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 13:28:23.753487  624195 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 13:28:23.863616  624195 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 13:28:23.863738  624195 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 13:28:23.863835  624195 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 13:28:24.063090  624195 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 13:28:24.095838  624195 out.go:204]   - Generating certificates and keys ...
	I0520 13:28:24.095982  624195 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 13:28:24.096085  624195 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 13:28:24.348072  624195 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 13:28:24.447420  624195 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 13:28:24.658729  624195 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 13:28:24.905241  624195 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 13:28:25.030560  624195 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 13:28:25.030781  624195 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-170194 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0520 13:28:25.112572  624195 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 13:28:25.112787  624195 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-170194 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0520 13:28:25.315895  624195 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 13:28:25.634467  624195 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 13:28:26.078695  624195 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 13:28:26.078923  624195 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 13:28:26.243887  624195 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 13:28:26.352281  624195 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 13:28:26.614181  624195 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 13:28:26.838217  624195 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 13:28:26.926318  624195 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 13:28:26.926883  624195 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 13:28:26.929498  624195 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 13:28:26.989435  624195 out.go:204]   - Booting up control plane ...
	I0520 13:28:26.989613  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 13:28:26.989718  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 13:28:26.989808  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 13:28:26.989943  624195 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 13:28:26.990050  624195 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 13:28:26.990089  624195 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 13:28:27.081215  624195 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 13:28:27.081390  624195 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 13:28:27.582299  624195 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.301224ms
	I0520 13:28:27.582438  624195 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 13:28:33.554295  624195 kubeadm.go:309] [api-check] The API server is healthy after 5.971599406s
	I0520 13:28:33.575332  624195 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 13:28:33.597301  624195 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 13:28:33.632955  624195 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 13:28:33.633208  624195 kubeadm.go:309] [mark-control-plane] Marking the node ha-170194 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 13:28:33.646407  624195 kubeadm.go:309] [bootstrap-token] Using token: xxbnz9.veyzbo9bfh7fya27
	I0520 13:28:33.648795  624195 out.go:204]   - Configuring RBAC rules ...
	I0520 13:28:33.648922  624195 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 13:28:33.658191  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 13:28:33.674301  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 13:28:33.678034  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 13:28:33.681836  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 13:28:33.685596  624195 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 13:28:33.962504  624195 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 13:28:34.414749  624195 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 13:28:34.961323  624195 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 13:28:34.962213  624195 kubeadm.go:309] 
	I0520 13:28:34.962298  624195 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 13:28:34.962312  624195 kubeadm.go:309] 
	I0520 13:28:34.962389  624195 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 13:28:34.962404  624195 kubeadm.go:309] 
	I0520 13:28:34.962447  624195 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 13:28:34.962517  624195 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 13:28:34.962592  624195 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 13:28:34.962610  624195 kubeadm.go:309] 
	I0520 13:28:34.962665  624195 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 13:28:34.962671  624195 kubeadm.go:309] 
	I0520 13:28:34.962710  624195 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 13:28:34.962717  624195 kubeadm.go:309] 
	I0520 13:28:34.962769  624195 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 13:28:34.962844  624195 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 13:28:34.962906  624195 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 13:28:34.962912  624195 kubeadm.go:309] 
	I0520 13:28:34.962988  624195 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 13:28:34.963056  624195 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 13:28:34.963062  624195 kubeadm.go:309] 
	I0520 13:28:34.963130  624195 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xxbnz9.veyzbo9bfh7fya27 \
	I0520 13:28:34.963216  624195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 13:28:34.963258  624195 kubeadm.go:309] 	--control-plane 
	I0520 13:28:34.963287  624195 kubeadm.go:309] 
	I0520 13:28:34.963403  624195 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 13:28:34.963416  624195 kubeadm.go:309] 
	I0520 13:28:34.963534  624195 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xxbnz9.veyzbo9bfh7fya27 \
	I0520 13:28:34.963648  624195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 13:28:34.964372  624195 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 13:28:34.964410  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:28:34.964423  624195 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 13:28:34.966911  624195 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 13:28:34.969080  624195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 13:28:34.974261  624195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 13:28:34.974279  624195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 13:28:34.992012  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
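	The kubectl apply above installs the CNI manifest minikube generated (kindnet, per the "multinode detected ... recommending kindnet" lines). A quick health check after the apply, assuming the usual kindnet object names in kube-system, which the log does not show:
	    # Check that the CNI daemonset came up (object names assumed, not from the log).
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get daemonsets,pods -o wide | grep -i kindnet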
	I0520 13:28:35.316278  624195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 13:28:35.316385  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:35.316427  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194 minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=true
	I0520 13:28:35.368776  624195 ops.go:34] apiserver oom_adj: -16
	I0520 13:28:35.524586  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:36.025341  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:36.524735  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:37.025186  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:37.524757  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:38.024640  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:38.524933  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:39.025039  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:39.524713  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:40.024950  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:40.524909  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:41.024670  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:41.524965  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:42.025369  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:42.524728  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:43.025393  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:43.524666  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:44.025387  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:44.524749  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:45.025662  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:45.525640  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:46.025012  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:46.525060  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:47.025660  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:47.175131  624195 kubeadm.go:1107] duration metric: took 11.858816146s to wait for elevateKubeSystemPrivileges
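
Editor's note: the repeated `kubectl get sa default` runs above are a readiness poll; kubeadm's RBAC bootstrap is treated as finished once the `default` ServiceAccount resolves. A minimal Go sketch of that wait loop, assuming a hypothetical `runRemote` helper in place of minikube's ssh_runner (the kubectl path and kubeconfig mirror the log, the helper name does not exist in minikube):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` roughly every 500ms until it
// succeeds or the deadline passes, mirroring the loop in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	const cmd = "sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runRemote(cmd); err == nil {
			return nil // RBAC bootstrap finished
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

// runRemote is a hypothetical local stand-in; the real code runs the command
// over SSH on the control-plane node.
func runRemote(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
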
	W0520 13:28:47.175195  624195 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 13:28:47.175209  624195 kubeadm.go:393] duration metric: took 23.665011428s to StartCluster
	I0520 13:28:47.175236  624195 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:47.175354  624195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:28:47.176264  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:47.176545  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 13:28:47.176556  624195 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:28:47.176581  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:28:47.176597  624195 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 13:28:47.176663  624195 addons.go:69] Setting storage-provisioner=true in profile "ha-170194"
	I0520 13:28:47.176683  624195 addons.go:69] Setting default-storageclass=true in profile "ha-170194"
	I0520 13:28:47.176742  624195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-170194"
	I0520 13:28:47.176802  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:47.176700  624195 addons.go:234] Setting addon storage-provisioner=true in "ha-170194"
	I0520 13:28:47.176858  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:28:47.177195  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.177227  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.177270  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.177310  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.193310  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0520 13:28:47.193313  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0520 13:28:47.193905  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.193914  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.194463  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.194485  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.194613  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.194638  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.194861  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.195042  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.195083  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.195654  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.195686  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.197520  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:28:47.197909  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 13:28:47.198505  624195 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 13:28:47.198812  624195 addons.go:234] Setting addon default-storageclass=true in "ha-170194"
	I0520 13:28:47.198865  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:28:47.199295  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.199352  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.211688  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0520 13:28:47.212323  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.212951  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.212987  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.213361  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.213578  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.214825  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0520 13:28:47.215350  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.215904  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.215921  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.215971  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:47.219043  624195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:28:47.216311  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.221534  624195 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:28:47.219767  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.221577  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.221601  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 13:28:47.221626  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:47.224993  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.225431  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:47.225469  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.225744  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:47.226002  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:47.226162  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:47.226304  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:47.238287  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0520 13:28:47.238818  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.239375  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.239399  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.239826  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.240058  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.241890  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:47.242136  624195 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 13:28:47.242151  624195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 13:28:47.242165  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:47.245241  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.245728  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:47.245755  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.245953  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:47.246159  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:47.246321  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:47.246460  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:47.376266  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 13:28:47.391766  624195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 13:28:47.446364  624195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:28:48.072427  624195 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
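
Editor's note: the CoreDNS rewrite logged at 13:28:47.376266 injects a `hosts` block mapping `host.minikube.internal` to the gateway IP by piping the ConfigMap through `sed` and `kubectl replace`. A slightly simplified Go sketch of how such a remote pipeline could be assembled (paths copied from the log; `buildCoreDNSPatchCmd` and `gatewayIP` are illustrative names, not minikube's actual API, and the extra `log` directive from the real command is omitted):

```go
package main

import "fmt"

// buildCoreDNSPatchCmd assembles a shell pipeline that inserts a hosts{}
// stanza ahead of the `forward . /etc/resolv.conf` line in the CoreDNS
// ConfigMap and replaces the object in place.
func buildCoreDNSPatchCmd(gatewayIP string) string {
	kubectl := "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	sed := fmt.Sprintf(
		`sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }'`,
		gatewayIP)
	return fmt.Sprintf("%s -n kube-system get configmap coredns -o yaml | %s | %s replace -f -",
		kubectl, sed, kubectl)
}

func main() {
	// Gateway IP taken from the "host record injected" line above.
	fmt.Println(buildCoreDNSPatchCmd("192.168.39.1"))
}
```
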
	I0520 13:28:48.072521  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.072546  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.072873  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.072890  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.072900  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.072907  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.073149  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.073163  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.073164  624195 main.go:141] libmachine: (ha-170194) DBG | Closing plugin on server side
	I0520 13:28:48.073339  624195 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 13:28:48.073353  624195 round_trippers.go:469] Request Headers:
	I0520 13:28:48.073364  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:28:48.073368  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:28:48.084371  624195 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 13:28:48.084940  624195 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 13:28:48.084972  624195 round_trippers.go:469] Request Headers:
	I0520 13:28:48.084981  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:28:48.084985  624195 round_trippers.go:473]     Content-Type: application/json
	I0520 13:28:48.084989  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:28:48.087741  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:28:48.087930  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.087949  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.088254  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.088275  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.088280  624195 main.go:141] libmachine: (ha-170194) DBG | Closing plugin on server side
	I0520 13:28:48.224085  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.224124  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.224442  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.224462  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.224471  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.224478  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.224753  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.224768  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.227624  624195 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0520 13:28:48.230004  624195 addons.go:505] duration metric: took 1.053400697s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0520 13:28:48.230056  624195 start.go:245] waiting for cluster config update ...
	I0520 13:28:48.230074  624195 start.go:254] writing updated cluster config ...
	I0520 13:28:48.232562  624195 out.go:177] 
	I0520 13:28:48.234985  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:48.235103  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:48.238491  624195 out.go:177] * Starting "ha-170194-m02" control-plane node in "ha-170194" cluster
	I0520 13:28:48.241389  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:28:48.241422  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:28:48.241527  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:28:48.241538  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:28:48.241611  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:48.241786  624195 start.go:360] acquireMachinesLock for ha-170194-m02: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:28:48.241834  624195 start.go:364] duration metric: took 27.71µs to acquireMachinesLock for "ha-170194-m02"
	I0520 13:28:48.241853  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:28:48.241937  624195 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 13:28:48.245208  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:28:48.245349  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:48.245386  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:48.260813  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0520 13:28:48.261287  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:48.261782  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:48.261811  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:48.262150  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:48.262362  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:28:48.262523  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:28:48.262656  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:28:48.262678  624195 client.go:168] LocalClient.Create starting
	I0520 13:28:48.262709  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:28:48.262742  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:28:48.262756  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:28:48.262815  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:28:48.262832  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:28:48.262844  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:28:48.262863  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:28:48.262872  624195 main.go:141] libmachine: (ha-170194-m02) Calling .PreCreateCheck
	I0520 13:28:48.263019  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:28:48.263432  624195 main.go:141] libmachine: Creating machine...
	I0520 13:28:48.263445  624195 main.go:141] libmachine: (ha-170194-m02) Calling .Create
	I0520 13:28:48.263581  624195 main.go:141] libmachine: (ha-170194-m02) Creating KVM machine...
	I0520 13:28:48.264696  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found existing default KVM network
	I0520 13:28:48.264847  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found existing private KVM network mk-ha-170194
	I0520 13:28:48.264987  624195 main.go:141] libmachine: (ha-170194-m02) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 ...
	I0520 13:28:48.265015  624195 main.go:141] libmachine: (ha-170194-m02) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:28:48.265084  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.264932  624571 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:28:48.265150  624195 main.go:141] libmachine: (ha-170194-m02) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:28:48.520565  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.520428  624571 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa...
	I0520 13:28:48.688844  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.688694  624571 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/ha-170194-m02.rawdisk...
	I0520 13:28:48.688877  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Writing magic tar header
	I0520 13:28:48.688886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Writing SSH key tar header
	I0520 13:28:48.688894  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.688814  624571 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 ...
	I0520 13:28:48.688910  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02
	I0520 13:28:48.689001  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:28:48.689025  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:28:48.689039  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 (perms=drwx------)
	I0520 13:28:48.689059  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:28:48.689074  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:28:48.689091  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:28:48.689105  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:28:48.689121  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:28:48.689135  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:28:48.689149  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:28:48.689164  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:28:48.689184  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home
	I0520 13:28:48.689204  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Skipping /home - not owner
	I0520 13:28:48.689222  624195 main.go:141] libmachine: (ha-170194-m02) Creating domain...
	I0520 13:28:48.690305  624195 main.go:141] libmachine: (ha-170194-m02) define libvirt domain using xml: 
	I0520 13:28:48.690327  624195 main.go:141] libmachine: (ha-170194-m02) <domain type='kvm'>
	I0520 13:28:48.690339  624195 main.go:141] libmachine: (ha-170194-m02)   <name>ha-170194-m02</name>
	I0520 13:28:48.690350  624195 main.go:141] libmachine: (ha-170194-m02)   <memory unit='MiB'>2200</memory>
	I0520 13:28:48.690356  624195 main.go:141] libmachine: (ha-170194-m02)   <vcpu>2</vcpu>
	I0520 13:28:48.690362  624195 main.go:141] libmachine: (ha-170194-m02)   <features>
	I0520 13:28:48.690370  624195 main.go:141] libmachine: (ha-170194-m02)     <acpi/>
	I0520 13:28:48.690376  624195 main.go:141] libmachine: (ha-170194-m02)     <apic/>
	I0520 13:28:48.690384  624195 main.go:141] libmachine: (ha-170194-m02)     <pae/>
	I0520 13:28:48.690391  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690399  624195 main.go:141] libmachine: (ha-170194-m02)   </features>
	I0520 13:28:48.690406  624195 main.go:141] libmachine: (ha-170194-m02)   <cpu mode='host-passthrough'>
	I0520 13:28:48.690415  624195 main.go:141] libmachine: (ha-170194-m02)   
	I0520 13:28:48.690419  624195 main.go:141] libmachine: (ha-170194-m02)   </cpu>
	I0520 13:28:48.690425  624195 main.go:141] libmachine: (ha-170194-m02)   <os>
	I0520 13:28:48.690429  624195 main.go:141] libmachine: (ha-170194-m02)     <type>hvm</type>
	I0520 13:28:48.690435  624195 main.go:141] libmachine: (ha-170194-m02)     <boot dev='cdrom'/>
	I0520 13:28:48.690441  624195 main.go:141] libmachine: (ha-170194-m02)     <boot dev='hd'/>
	I0520 13:28:48.690470  624195 main.go:141] libmachine: (ha-170194-m02)     <bootmenu enable='no'/>
	I0520 13:28:48.690490  624195 main.go:141] libmachine: (ha-170194-m02)   </os>
	I0520 13:28:48.690497  624195 main.go:141] libmachine: (ha-170194-m02)   <devices>
	I0520 13:28:48.690507  624195 main.go:141] libmachine: (ha-170194-m02)     <disk type='file' device='cdrom'>
	I0520 13:28:48.690546  624195 main.go:141] libmachine: (ha-170194-m02)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/boot2docker.iso'/>
	I0520 13:28:48.690575  624195 main.go:141] libmachine: (ha-170194-m02)       <target dev='hdc' bus='scsi'/>
	I0520 13:28:48.690589  624195 main.go:141] libmachine: (ha-170194-m02)       <readonly/>
	I0520 13:28:48.690601  624195 main.go:141] libmachine: (ha-170194-m02)     </disk>
	I0520 13:28:48.690613  624195 main.go:141] libmachine: (ha-170194-m02)     <disk type='file' device='disk'>
	I0520 13:28:48.690626  624195 main.go:141] libmachine: (ha-170194-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:28:48.690642  624195 main.go:141] libmachine: (ha-170194-m02)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/ha-170194-m02.rawdisk'/>
	I0520 13:28:48.690658  624195 main.go:141] libmachine: (ha-170194-m02)       <target dev='hda' bus='virtio'/>
	I0520 13:28:48.690669  624195 main.go:141] libmachine: (ha-170194-m02)     </disk>
	I0520 13:28:48.690684  624195 main.go:141] libmachine: (ha-170194-m02)     <interface type='network'>
	I0520 13:28:48.690697  624195 main.go:141] libmachine: (ha-170194-m02)       <source network='mk-ha-170194'/>
	I0520 13:28:48.690708  624195 main.go:141] libmachine: (ha-170194-m02)       <model type='virtio'/>
	I0520 13:28:48.690717  624195 main.go:141] libmachine: (ha-170194-m02)     </interface>
	I0520 13:28:48.690732  624195 main.go:141] libmachine: (ha-170194-m02)     <interface type='network'>
	I0520 13:28:48.690747  624195 main.go:141] libmachine: (ha-170194-m02)       <source network='default'/>
	I0520 13:28:48.690756  624195 main.go:141] libmachine: (ha-170194-m02)       <model type='virtio'/>
	I0520 13:28:48.690780  624195 main.go:141] libmachine: (ha-170194-m02)     </interface>
	I0520 13:28:48.690807  624195 main.go:141] libmachine: (ha-170194-m02)     <serial type='pty'>
	I0520 13:28:48.690829  624195 main.go:141] libmachine: (ha-170194-m02)       <target port='0'/>
	I0520 13:28:48.690846  624195 main.go:141] libmachine: (ha-170194-m02)     </serial>
	I0520 13:28:48.690864  624195 main.go:141] libmachine: (ha-170194-m02)     <console type='pty'>
	I0520 13:28:48.690883  624195 main.go:141] libmachine: (ha-170194-m02)       <target type='serial' port='0'/>
	I0520 13:28:48.690895  624195 main.go:141] libmachine: (ha-170194-m02)     </console>
	I0520 13:28:48.690903  624195 main.go:141] libmachine: (ha-170194-m02)     <rng model='virtio'>
	I0520 13:28:48.690913  624195 main.go:141] libmachine: (ha-170194-m02)       <backend model='random'>/dev/random</backend>
	I0520 13:28:48.690920  624195 main.go:141] libmachine: (ha-170194-m02)     </rng>
	I0520 13:28:48.690927  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690933  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690941  624195 main.go:141] libmachine: (ha-170194-m02)   </devices>
	I0520 13:28:48.690948  624195 main.go:141] libmachine: (ha-170194-m02) </domain>
	I0520 13:28:48.690964  624195 main.go:141] libmachine: (ha-170194-m02) 
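
Editor's note: the XML emitted above is the libvirt domain definition for the new node: two virtio NICs (the private `mk-ha-170194` network plus `default`), the boot2docker ISO as a CD-ROM boot device, and the raw disk image. A compact Go `text/template` sketch that renders a similarly shaped, deliberately abbreviated domain document; the struct fields and paths are illustrative, not minikube's real config:

```go
package main

import (
	"os"
	"text/template"
)

// domainXML is an abbreviated rendition of the definition in the log:
// memory/vcpu sizing, ISO cdrom + raw disk, and two virtio network interfaces.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	// Illustrative values taken from the log lines above.
	vm := struct {
		Name, ISOPath, DiskPath, PrivateNet string
		MemoryMiB, CPUs                     int
	}{
		Name:       "ha-170194-m02",
		ISOPath:    "/path/to/boot2docker.iso",
		DiskPath:   "/path/to/ha-170194-m02.rawdisk",
		PrivateNet: "mk-ha-170194",
		MemoryMiB:  2200,
		CPUs:       2,
	}
	template.Must(template.New("domain").Parse(domainXML)).Execute(os.Stdout, vm)
}
```
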
	I0520 13:28:48.698862  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:30:22:2e in network default
	I0520 13:28:48.701066  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:48.701087  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring networks are active...
	I0520 13:28:48.702076  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring network default is active
	I0520 13:28:48.702469  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring network mk-ha-170194 is active
	I0520 13:28:48.702848  624195 main.go:141] libmachine: (ha-170194-m02) Getting domain xml...
	I0520 13:28:48.703648  624195 main.go:141] libmachine: (ha-170194-m02) Creating domain...
	I0520 13:28:49.949646  624195 main.go:141] libmachine: (ha-170194-m02) Waiting to get IP...
	I0520 13:28:49.950506  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:49.950886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:49.950952  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:49.950868  624571 retry.go:31] will retry after 260.432301ms: waiting for machine to come up
	I0520 13:28:50.213512  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:50.214024  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:50.214061  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:50.213958  624571 retry.go:31] will retry after 316.191611ms: waiting for machine to come up
	I0520 13:28:50.531590  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:50.532047  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:50.532079  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:50.531993  624571 retry.go:31] will retry after 469.182705ms: waiting for machine to come up
	I0520 13:28:51.002473  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:51.002920  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:51.002953  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:51.002870  624571 retry.go:31] will retry after 532.236669ms: waiting for machine to come up
	I0520 13:28:51.537274  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:51.537911  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:51.537940  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:51.537866  624571 retry.go:31] will retry after 469.464444ms: waiting for machine to come up
	I0520 13:28:52.008531  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:52.008968  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:52.008999  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:52.008925  624571 retry.go:31] will retry after 658.375912ms: waiting for machine to come up
	I0520 13:28:52.668762  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:52.669226  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:52.669269  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:52.669170  624571 retry.go:31] will retry after 1.046807109s: waiting for machine to come up
	I0520 13:28:53.718231  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:53.718626  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:53.718660  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:53.718578  624571 retry.go:31] will retry after 1.300389906s: waiting for machine to come up
	I0520 13:28:55.021098  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:55.021668  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:55.021697  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:55.021614  624571 retry.go:31] will retry after 1.666445023s: waiting for machine to come up
	I0520 13:28:56.690683  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:56.691224  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:56.691248  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:56.691175  624571 retry.go:31] will retry after 1.6710471s: waiting for machine to come up
	I0520 13:28:58.364546  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:58.365756  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:58.365794  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:58.365607  624571 retry.go:31] will retry after 1.861117457s: waiting for machine to come up
	I0520 13:29:00.229815  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:00.230274  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:00.230302  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:00.230229  624571 retry.go:31] will retry after 2.215945961s: waiting for machine to come up
	I0520 13:29:02.448575  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:02.448999  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:02.449028  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:02.448936  624571 retry.go:31] will retry after 3.796039161s: waiting for machine to come up
	I0520 13:29:06.247888  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:06.248421  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:06.248454  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:06.248359  624571 retry.go:31] will retry after 3.504798848s: waiting for machine to come up
	I0520 13:29:09.755718  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.756305  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.756326  624195 main.go:141] libmachine: (ha-170194-m02) Found IP for machine: 192.168.39.155
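
Editor's note: the "Waiting to get IP" sequence above is a retry loop with randomized, growing delays; each failed probe of the DHCP leases schedules another attempt a little later until the domain's primary IP appears. A generic Go sketch of that pattern, with a hypothetical `lookupIP` probe standing in for the libvirt lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries probe() with a jittered, roughly doubling delay, mirroring
// the retry.go backoff seen in the log (hundreds of ms growing to seconds).
func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		// Jitter keeps concurrent waiters from probing in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// lookupIP is a stand-in for querying the libvirt DHCP leases; here it
	// simply fails a few times before "finding" an address.
	attempts := 0
	lookupIP := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.155", nil
	}
	ip, err := waitForIP(lookupIP, time.Minute)
	fmt.Println(ip, err)
}
```
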
	I0520 13:29:09.756337  624195 main.go:141] libmachine: (ha-170194-m02) Reserving static IP address...
	I0520 13:29:09.756702  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find host DHCP lease matching {name: "ha-170194-m02", mac: "52:54:00:3b:bd:91", ip: "192.168.39.155"} in network mk-ha-170194
	I0520 13:29:09.837735  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Getting to WaitForSSH function...
	I0520 13:29:09.837769  624195 main.go:141] libmachine: (ha-170194-m02) Reserved static IP address: 192.168.39.155
	I0520 13:29:09.837790  624195 main.go:141] libmachine: (ha-170194-m02) Waiting for SSH to be available...
	I0520 13:29:09.840897  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.841394  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:09.841425  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.841636  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using SSH client type: external
	I0520 13:29:09.841662  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa (-rw-------)
	I0520 13:29:09.841696  624195 main.go:141] libmachine: (ha-170194-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:29:09.841709  624195 main.go:141] libmachine: (ha-170194-m02) DBG | About to run SSH command:
	I0520 13:29:09.841721  624195 main.go:141] libmachine: (ha-170194-m02) DBG | exit 0
	I0520 13:29:09.965601  624195 main.go:141] libmachine: (ha-170194-m02) DBG | SSH cmd err, output: <nil>: 
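
Editor's note: for the first connection the driver shells out to the system `ssh` binary with host-key checking disabled and the machine's generated key, running `exit 0` purely as a reachability probe. A Go sketch of that invocation with os/exec; the flags and paths are copied from the DBG line above, while the helper name is made up:

```go
package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the target over the system ssh client, using the
// same non-interactive options seen in the log. A zero exit status means the
// VM is reachable and the injected key works.
func probeSSH(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := probeSSH("docker", "192.168.39.155",
		"/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa")
	fmt.Println("ssh probe error:", err)
}
```
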
	I0520 13:29:09.965913  624195 main.go:141] libmachine: (ha-170194-m02) KVM machine creation complete!
	I0520 13:29:09.966212  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:29:09.966833  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:09.967078  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:09.967296  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:29:09.967314  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:29:09.968735  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:29:09.968754  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:29:09.968761  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:29:09.968769  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:09.971729  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.972179  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:09.972217  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.972452  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:09.972642  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:09.972850  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:09.973010  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:09.973225  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:09.973538  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:09.973556  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:29:10.072679  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:29:10.072712  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:29:10.072724  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.075775  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.076221  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.076250  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.076477  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.076738  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.076901  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.077051  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.077207  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.077401  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.077413  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:29:10.177956  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:29:10.178073  624195 main.go:141] libmachine: found compatible host: buildroot
	I0520 13:29:10.178083  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:29:10.178091  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.178395  624195 buildroot.go:166] provisioning hostname "ha-170194-m02"
	I0520 13:29:10.178433  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.178702  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.181773  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.182140  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.182174  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.182345  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.182574  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.182736  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.182904  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.183077  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.183262  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.183288  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194-m02 && echo "ha-170194-m02" | sudo tee /etc/hostname
	I0520 13:29:10.296106  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194-m02
	
	I0520 13:29:10.296139  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.299063  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.299448  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.299472  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.299639  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.299875  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.300053  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.300212  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.300350  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.300553  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.300577  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:29:10.405448  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
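
Editor's note: the shell fragment above makes the new hostname resolvable locally: if no `/etc/hosts` entry ends in the hostname, it either rewrites an existing `127.0.1.1` line or appends one. A Go sketch of how that remote command string could be composed for an arbitrary hostname; `hostsFixupCmd` is an illustrative name, not minikube's:

```go
package main

import "fmt"

// hostsFixupCmd reproduces the guarded /etc/hosts edit from the log for the
// given hostname: update a 127.0.1.1 entry if present, otherwise append one.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupCmd("ha-170194-m02"))
}
```
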
	I0520 13:29:10.405492  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:29:10.405509  624195 buildroot.go:174] setting up certificates
	I0520 13:29:10.405519  624195 provision.go:84] configureAuth start
	I0520 13:29:10.405529  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.405831  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:10.408379  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.408720  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.408747  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.408876  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.411430  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.411759  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.411790  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.411901  624195 provision.go:143] copyHostCerts
	I0520 13:29:10.411938  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:29:10.411974  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:29:10.411984  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:29:10.412057  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:29:10.412171  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:29:10.412197  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:29:10.412206  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:29:10.412247  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:29:10.412313  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:29:10.412336  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:29:10.412342  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:29:10.412376  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:29:10.412442  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194-m02 san=[127.0.0.1 192.168.39.155 ha-170194-m02 localhost minikube]
	I0520 13:29:10.629236  624195 provision.go:177] copyRemoteCerts
	I0520 13:29:10.629318  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:29:10.629350  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.631891  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.632207  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.632244  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.632401  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.632626  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.632795  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.632921  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:10.711236  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:29:10.711305  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:29:10.738073  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:29:10.738147  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 13:29:10.763816  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:29:10.763902  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:29:10.787048  624195 provision.go:87] duration metric: took 381.511669ms to configureAuth
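(The lines above copy the host CA material into the profile and then mint a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.39.155, the hostname ha-170194-m02, localhost and minikube, per provision.go:117. As a rough, hedged sketch of that kind of step -- not minikube's actual provisioner; the key size, validity period and output file names are assumptions, and error handling is elided -- a CA-signed server certificate with comparable SAN entries can be produced in Go like this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair; in the log the CA already exists under .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with SAN entries like the ones logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-170194-m02", Organization: []string{"jenkins.ha-170194-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-170194-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.155")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write cert and key as PEM, analogous to machines/server.pem and server-key.pem.
	certOut, _ := os.Create("server.pem")
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})

	keyOut, _ := os.Create("server-key.pem")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
}

In the log, the analogous server.pem and server-key.pem are then pushed to /etc/docker on the guest by copyRemoteCerts.)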
	I0520 13:29:10.787090  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:29:10.787327  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:10.787453  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.790246  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.790624  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.790656  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.790829  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.791053  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.791201  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.791319  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.791479  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.791733  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.791759  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:29:11.046719  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:29:11.046758  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:29:11.046771  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetURL
	I0520 13:29:11.048372  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using libvirt version 6000000
	I0520 13:29:11.051077  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.051434  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.051469  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.051704  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:29:11.051721  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:29:11.051728  624195 client.go:171] duration metric: took 22.789040995s to LocalClient.Create
	I0520 13:29:11.051755  624195 start.go:167] duration metric: took 22.789100264s to libmachine.API.Create "ha-170194"
	I0520 13:29:11.051764  624195 start.go:293] postStartSetup for "ha-170194-m02" (driver="kvm2")
	I0520 13:29:11.051774  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:29:11.051791  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.052036  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:29:11.052069  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.054471  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.054862  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.054887  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.055044  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.055243  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.055422  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.055595  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:11.136114  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:29:11.140174  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:29:11.140213  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:29:11.140301  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:29:11.140371  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:29:11.140383  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:29:11.140461  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:29:11.149831  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:29:11.171949  624195 start.go:296] duration metric: took 120.169054ms for postStartSetup
	I0520 13:29:11.172005  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:29:11.172773  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:11.175414  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.175819  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.175852  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.176071  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:29:11.176325  624195 start.go:128] duration metric: took 22.934372346s to createHost
	I0520 13:29:11.176357  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.178710  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.179119  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.179154  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.179317  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.179558  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.179729  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.179903  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.180098  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:11.180265  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:11.180275  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:29:11.277928  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211751.253075568
	
	I0520 13:29:11.277952  624195 fix.go:216] guest clock: 1716211751.253075568
	I0520 13:29:11.277960  624195 fix.go:229] Guest: 2024-05-20 13:29:11.253075568 +0000 UTC Remote: 2024-05-20 13:29:11.176341982 +0000 UTC m=+76.423206883 (delta=76.733586ms)
	I0520 13:29:11.277976  624195 fix.go:200] guest clock delta is within tolerance: 76.733586ms
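(fix.go reads the guest clock over SSH -- the `date +%!s(MISSING).%!N(MISSING)` above appears to be the logger swallowing the shell's `%s.%N` format verbs -- and compares it with the host-side timestamp, accepting the node while the delta stays inside a tolerance. Re-computing the logged values as a sanity check, with the tolerance value chosen arbitrarily for this sketch rather than taken from minikube:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log above: guest clock vs. host clock captured around the same instant.
	guest := time.Unix(1716211751, 253075568)
	host := time.Date(2024, 5, 20, 13, 29, 11, 176341982, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance is an assumption for the sketch, not minikube's actual setting.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}

This reproduces the logged delta of roughly 76.7ms.)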
	I0520 13:29:11.277980  624195 start.go:83] releasing machines lock for "ha-170194-m02", held for 23.036137695s
	I0520 13:29:11.278004  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.278289  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:11.280962  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.281421  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.281445  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.284797  624195 out.go:177] * Found network options:
	I0520 13:29:11.287145  624195 out.go:177]   - NO_PROXY=192.168.39.92
	W0520 13:29:11.289241  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:29:11.289291  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.289930  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.290129  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.290238  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:29:11.290292  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	W0520 13:29:11.290364  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:29:11.290445  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:29:11.290464  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.293198  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293364  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293607  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.293636  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293736  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.293755  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.293781  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293920  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.293931  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.294120  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.294154  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.294313  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.294304  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:11.294467  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
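(Each `Run:` entry in this log is a command executed on the guest through an SSH client like the two created just above at sshutil.go:53 -- user docker, key from machines/ha-170194-m02/id_rsa. A minimal stand-alone sketch of such a client, using golang.org/x/crypto/ssh rather than minikube's ssh_runner, and skipping host-key verification purely to keep the sketch short:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // host-key verification skipped for brevity in this sketch
	}
	client, err := ssh.Dial("tcp", "192.168.39.155:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same kind of command the provisioner ran earlier to drop CRI-O options into /etc/sysconfig.
	out, err := session.CombinedOutput(`echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
})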
	I0520 13:29:11.528028  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:29:11.534156  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:29:11.534243  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:29:11.550155  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:29:11.550182  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:29:11.550269  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:29:11.566853  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:29:11.579779  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:29:11.579853  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:29:11.593518  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:29:11.607644  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:29:11.729618  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:29:11.883567  624195 docker.go:233] disabling docker service ...
	I0520 13:29:11.883664  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:29:11.897860  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:29:11.911395  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:29:12.036291  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:29:12.155265  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:29:12.169239  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:29:12.187705  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:29:12.187768  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.197624  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:29:12.197739  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.207577  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.217206  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.227532  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:29:12.237505  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.247577  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.264555  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.275960  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:29:12.285127  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:29:12.285192  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:29:12.299122  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:29:12.309316  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:12.438337  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:29:12.601443  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:29:12.601522  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:29:12.606203  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:29:12.606294  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:29:12.609877  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:29:12.646713  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:29:12.646819  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:29:12.672438  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:29:12.700788  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:29:12.703097  624195 out.go:177]   - env NO_PROXY=192.168.39.92
	I0520 13:29:12.705052  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:12.707544  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:12.707858  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:12.707886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:12.708132  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:29:12.712686  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:29:12.724807  624195 mustload.go:65] Loading cluster: ha-170194
	I0520 13:29:12.725080  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:12.725514  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:12.725551  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:12.740541  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0520 13:29:12.741019  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:12.741564  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:12.741586  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:12.741966  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:12.742203  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:29:12.743919  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:29:12.744205  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:12.744245  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:12.759438  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0520 13:29:12.759829  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:12.760261  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:12.760286  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:12.760610  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:12.760795  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:29:12.760964  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.155
	I0520 13:29:12.760976  624195 certs.go:194] generating shared ca certs ...
	I0520 13:29:12.760988  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:12.761132  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:29:12.761173  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:29:12.761183  624195 certs.go:256] generating profile certs ...
	I0520 13:29:12.761288  624195 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:29:12.761319  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8
	I0520 13:29:12.761335  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.254]
	I0520 13:29:13.038501  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 ...
	I0520 13:29:13.038539  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8: {Name:mkdf5eaf058ef04410571d3595f24432d4e719c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:13.038742  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8 ...
	I0520 13:29:13.038764  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8: {Name:mkc7f28c6cc13ab984446cb2344b9f6ccaeae860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:13.038864  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:29:13.039011  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
	I0520 13:29:13.039154  624195 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:29:13.039171  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:29:13.039185  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:29:13.039199  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:29:13.039214  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:29:13.039226  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:29:13.039240  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:29:13.039253  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:29:13.039266  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:29:13.039314  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:29:13.039342  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:29:13.039352  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:29:13.039374  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:29:13.039396  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:29:13.039417  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:29:13.039452  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:29:13.039477  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.039491  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.039503  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.039539  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:29:13.042885  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:13.043259  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:29:13.043295  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:13.043497  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:29:13.043747  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:29:13.043928  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:29:13.044066  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:29:13.113643  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 13:29:13.118570  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 13:29:13.129028  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 13:29:13.133684  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 13:29:13.145361  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 13:29:13.149303  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 13:29:13.159094  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 13:29:13.162868  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 13:29:13.174323  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 13:29:13.178516  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 13:29:13.190302  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 13:29:13.194275  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 13:29:13.204637  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:29:13.229934  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:29:13.252924  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:29:13.276381  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:29:13.298860  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 13:29:13.321647  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:29:13.344077  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:29:13.366579  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:29:13.388680  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:29:13.411437  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:29:13.434477  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:29:13.457435  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 13:29:13.473705  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 13:29:13.489456  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 13:29:13.505008  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 13:29:13.520979  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 13:29:13.537111  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 13:29:13.553088  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 13:29:13.568524  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:29:13.573813  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:29:13.583657  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.587625  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.587682  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.593610  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:29:13.604353  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:29:13.614642  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.619003  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.619076  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.624541  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:29:13.635084  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:29:13.645804  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.650072  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.650128  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.655544  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:29:13.667113  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:29:13.670999  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:29:13.671057  624195 kubeadm.go:928] updating node {m02 192.168.39.155 8443 v1.30.1 crio true true} ...
	I0520 13:29:13.671171  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:29:13.671203  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:29:13.671235  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:29:13.689752  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:29:13.689823  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
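(The manifest above is the generated kube-vip static pod: it advertises the control-plane VIP 192.168.39.254 over ARP on eth0, enables control-plane load-balancing on port 8443 (cp_enable/lb_enable/lb_port), and elects a leader via the plndr-cp-lock lease. A quick, hedged way to confirm from outside that the VIP is answering TLS on 8443 -- not something this test does -- is a probe along these lines:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// VIP and port from the kube-vip manifest above.
	addr := net.JoinHostPort("192.168.39.254", "8443")

	dialer := &net.Dialer{Timeout: 5 * time.Second}
	// The API server's cert is signed by the cluster CA, so verification is skipped for this reachability probe only.
	conn, err := tls.DialWithDialer(dialer, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("VIP not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Printf("VIP %s answered; server cert CN=%q\n", addr, conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
})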
	I0520 13:29:13.689876  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:13.700970  624195 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 13:29:13.701043  624195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:13.712823  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 13:29:13.712860  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:29:13.712937  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:29:13.712953  624195 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 13:29:13.713018  624195 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 13:29:13.717407  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 13:29:13.717438  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 13:29:19.375978  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:29:19.376066  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:29:19.380820  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 13:29:19.380859  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 13:29:24.665771  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:29:24.680483  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:29:24.680601  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:29:24.685128  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 13:29:24.685175  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
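(The three transfers above pull kubectl, kubeadm and kubelet from dl.k8s.io, each with a `checksum=file:` companion URL pointing at the published .sha256 digest, and then scp them into /var/lib/minikube/binaries/v1.30.1 on the guest. A self-contained sketch of the download-and-verify step for one binary, stdlib only -- not minikube's download.go, which additionally goes through its local cache directory:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	return body
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"

	bin := fetch(base)                                          // the binary itself
	want := strings.Fields(string(fetch(base + ".sha256")))[0] // published hex digest

	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubectl verified and written")
})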
	I0520 13:29:25.072572  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 13:29:25.082547  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 13:29:25.098548  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:29:25.114441  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 13:29:25.130086  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:29:25.133721  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:29:25.145122  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:25.261442  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:25.277748  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:29:25.278120  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:25.278189  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:25.293754  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0520 13:29:25.294332  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:25.294933  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:25.294960  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:25.295355  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:25.295603  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:29:25.295778  624195 start.go:316] joinCluster: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cluster
Name:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:25.295879  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 13:29:25.295898  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:29:25.299469  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:25.300086  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:29:25.300111  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:25.300359  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:29:25.300583  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:29:25.300783  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:29:25.300986  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:29:25.459639  624195 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:29:25.459720  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u17l93.2jyx28d5o2okpqwi --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m02 --control-plane --apiserver-advertise-address=192.168.39.155 --apiserver-bind-port=8443"
	I0520 13:29:48.282388  624195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u17l93.2jyx28d5o2okpqwi --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m02 --control-plane --apiserver-advertise-address=192.168.39.155 --apiserver-bind-port=8443": (22.822635673s)
	I0520 13:29:48.282441  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 13:29:48.786469  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194-m02 minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=false
	I0520 13:29:48.922273  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170194-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 13:29:49.030083  624195 start.go:318] duration metric: took 23.734298138s to joinCluster
	I0520 13:29:49.030205  624195 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:29:49.032631  624195 out.go:177] * Verifying Kubernetes components...
	I0520 13:29:49.030513  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:49.035244  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:49.315891  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:49.348786  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:29:49.349119  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 13:29:49.349218  624195 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.92:8443
	I0520 13:29:49.349484  624195 node_ready.go:35] waiting up to 6m0s for node "ha-170194-m02" to be "Ready" ...
	I0520 13:29:49.349577  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:49.349585  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:49.349593  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:49.349596  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:49.359468  624195 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 13:29:49.849934  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:49.849961  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:49.849971  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:49.849975  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:49.855145  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:50.350315  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:50.350348  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:50.350362  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:50.350369  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:50.355497  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:50.849881  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:50.849913  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:50.849925  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:50.849930  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:50.853109  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:51.350010  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:51.350033  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:51.350041  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:51.350045  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:51.352871  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:51.353399  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:51.850494  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:51.850519  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:51.850527  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:51.850532  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:51.853666  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:52.350617  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:52.350644  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:52.350655  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:52.350659  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:52.390069  624195 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0520 13:29:52.850447  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:52.850474  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:52.850486  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:52.850494  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:52.853650  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:53.349939  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:53.349966  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:53.349975  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:53.349980  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:53.353696  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:53.354214  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:53.850573  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:53.850604  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:53.850616  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:53.850623  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:53.854185  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:54.350160  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:54.350186  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:54.350198  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:54.350205  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:54.354161  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:54.850115  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:54.850146  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:54.850156  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:54.850164  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:54.854077  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.349986  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:55.350013  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:55.350025  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:55.350033  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:55.353457  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.850027  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:55.850050  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:55.850058  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:55.850062  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:55.853733  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.854520  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:56.349897  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.349926  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.349934  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.349937  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.353540  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.354372  624195 node_ready.go:49] node "ha-170194-m02" has status "Ready":"True"
	I0520 13:29:56.354396  624195 node_ready.go:38] duration metric: took 7.004890219s for node "ha-170194-m02" to be "Ready" ...
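
The repeated GET /api/v1/nodes/ha-170194-m02 calls above are a readiness poll: the Node object is re-fetched roughly every 500ms until its Ready condition reports True or the 6m0s budget runs out. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default location (illustrative only, not minikube's actual implementation; the helper name and values are made up for the example):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-fetches the named Node every 500ms until its Ready
    // condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(context.Background(), cs, "ha-170194-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
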
	I0520 13:29:56.354409  624195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:29:56.354495  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:29:56.354509  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.354520  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.354527  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.359557  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:56.365455  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.365593  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s28r6
	I0520 13:29:56.365607  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.365618  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.365626  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.369109  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.369824  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.369843  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.369852  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.369856  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.372341  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.373106  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.373128  624195 pod_ready.go:81] duration metric: took 7.643435ms for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.373140  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.373218  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vk78q
	I0520 13:29:56.373229  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.373239  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.373272  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.375884  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.376473  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.376487  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.376493  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.376941  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.380502  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.380962  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.380987  624195 pod_ready.go:81] duration metric: took 7.835879ms for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.380998  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.381057  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194
	I0520 13:29:56.381065  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.381072  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.381079  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.383534  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.384029  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.384043  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.384050  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.384054  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.386238  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.386702  624195 pod_ready.go:92] pod "etcd-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.386723  624195 pod_ready.go:81] duration metric: took 5.714217ms for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.386731  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.386782  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:56.386790  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.386796  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.386799  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.389074  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.389693  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.389709  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.389720  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.389724  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.391967  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.888044  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:56.888083  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.888102  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.888111  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.892011  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.892734  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.892754  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.892764  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.892769  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.895694  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:57.387037  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:57.387073  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.387082  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.387087  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.391196  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:29:57.392147  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:57.392165  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.392173  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.392176  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.395968  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:57.887577  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:57.887607  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.887616  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.887620  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.891376  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:57.892031  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:57.892050  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.892057  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.892061  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.894823  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:58.387075  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:58.387101  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.387109  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.387113  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.391171  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:29:58.392063  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:58.392084  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.392094  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.392100  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.395375  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:58.395926  624195 pod_ready.go:102] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 13:29:58.887336  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:58.887365  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.887375  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.887379  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.890962  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:58.891646  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:58.891668  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.891676  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.891680  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.894580  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:59.387408  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:59.387434  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.387441  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.387444  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.391372  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.391902  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:59.391918  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.391925  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.391929  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.395200  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.887604  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:59.887632  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.887640  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.887643  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.890788  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.891549  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:59.891569  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.891582  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.891587  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.894490  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.387384  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:00.387409  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.387418  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.387422  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.391021  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:00.391902  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:00.391922  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.391929  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.391933  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.394751  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.887686  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:00.887713  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.887721  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.887725  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.890982  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:00.891621  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:00.891636  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.891643  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.891647  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.894420  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.894843  624195 pod_ready.go:102] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 13:30:01.387197  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:01.387224  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.387234  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.387237  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.390720  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:01.391270  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:01.391284  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.391291  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.391295  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.393881  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:01.887956  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:01.887999  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.888009  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.888014  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.891512  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:01.892189  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:01.892206  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.892214  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.892217  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.895080  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.387046  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:02.387082  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.387091  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.387095  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.390493  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.391270  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.391293  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.391302  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.391311  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.394529  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.887590  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:02.887620  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.887633  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.887639  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.891110  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.891791  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.891809  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.891815  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.891818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.894764  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.895304  624195 pod_ready.go:92] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.895325  624195 pod_ready.go:81] duration metric: took 6.508587194s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.895340  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.895404  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194
	I0520 13:30:02.895411  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.895417  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.895423  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.897809  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.898741  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:02.898760  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.898771  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.898776  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.901124  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.901607  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.901627  624195 pod_ready.go:81] duration metric: took 6.278538ms for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.901637  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.901689  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:30:02.901697  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.901704  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.901709  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.904714  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.905672  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.905690  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.905703  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.905709  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.908693  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.909421  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.909443  624195 pod_ready.go:81] duration metric: took 7.798305ms for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.909456  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.909524  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:30:02.909535  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.909545  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.909555  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.912613  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.913317  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:02.913333  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.913344  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.913349  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.916309  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.916815  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.916838  624195 pod_ready.go:81] duration metric: took 7.36995ms for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.916850  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.950325  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:30:02.950355  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.950367  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.950372  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.953590  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.150736  624195 request.go:629] Waited for 196.368698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.150799  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.150805  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.150812  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.150818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.156970  624195 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 13:30:03.157575  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.157602  624195 pod_ready.go:81] duration metric: took 240.743475ms for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
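
The "Waited for … due to client-side throttling, not priority and fairness" entries come from client-go's client-side rate limiter. With QPS and Burst left at 0 in the rest.Config dumped earlier, the client falls back to its defaults (roughly 5 requests/s with a burst of 10), so back-to-back GETs get queued for a couple of hundred milliseconds. A hedged sketch of raising those limits, assuming the imports from the first sketch plus k8s.io/client-go/rest; the numbers are arbitrary examples, not minikube's settings:

    // buildFastClient raises client-go's client-side rate limits before the
    // clientset is created; cfg is a *rest.Config built via clientcmd as in the
    // first sketch. The values are arbitrary examples, not minikube's settings.
    func buildFastClient(cfg *rest.Config) kubernetes.Interface {
        cfg.QPS = 50    // steady-state requests per second (defaults to ~5 when left at 0)
        cfg.Burst = 100 // short-burst allowance (defaults to ~10 when left at 0)
        return kubernetes.NewForConfigOrDie(cfg)
    }
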
	I0520 13:30:03.157618  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.350912  624195 request.go:629] Waited for 193.189896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:30:03.350994  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:30:03.351001  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.351020  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.351025  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.357026  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:03.550292  624195 request.go:629] Waited for 192.362807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.550393  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.550405  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.550414  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.550421  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.553901  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.554429  624195 pod_ready.go:92] pod "kube-proxy-7ncvb" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.554449  624195 pod_ready.go:81] duration metric: took 396.823504ms for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.554460  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.750598  624195 request.go:629] Waited for 196.037132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:30:03.750703  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:30:03.750715  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.750726  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.750744  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.754602  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.950718  624195 request.go:629] Waited for 195.384951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:03.950797  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:03.950804  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.950815  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.950822  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.953903  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.954503  624195 pod_ready.go:92] pod "kube-proxy-qth8f" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.954524  624195 pod_ready.go:81] duration metric: took 400.058159ms for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.954534  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.150643  624195 request.go:629] Waited for 196.034483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:30:04.150730  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:30:04.150749  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.150778  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.150784  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.153734  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:04.350698  624195 request.go:629] Waited for 196.367794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:04.350777  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:04.350784  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.350795  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.350807  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.354298  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.354849  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:04.354871  624195 pod_ready.go:81] duration metric: took 400.328782ms for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.354883  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.550951  624195 request.go:629] Waited for 195.969359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:30:04.551018  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:30:04.551023  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.551034  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.551039  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.554386  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.750250  624195 request.go:629] Waited for 195.258698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:04.750314  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:04.750319  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.750326  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.750332  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.753603  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.754104  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:04.754127  624195 pod_ready.go:81] duration metric: took 399.235803ms for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.754142  624195 pod_ready.go:38] duration metric: took 8.399714217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
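
The pod_ready phase above lists the kube-system pods once and then re-checks each system-critical pod's Ready condition (plus the node it is scheduled on) until all report True. A minimal helper in the same spirit, assuming the clientset and imports from the first sketch (illustrative, not minikube's code):

    // podReady reports whether a Pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // systemPodsReady lists kube-system pods matching a label selector
    // (e.g. "k8s-app=kube-proxy") and reports whether every match is Ready.
    func systemPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for i := range pods.Items {
            if !podReady(&pods.Items[i]) {
                return false, nil
            }
        }
        return true, nil
    }
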
	I0520 13:30:04.754162  624195 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:30:04.754227  624195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:30:04.768939  624195 api_server.go:72] duration metric: took 15.738685815s to wait for apiserver process to appear ...
	I0520 13:30:04.768965  624195 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:30:04.768989  624195 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0520 13:30:04.775044  624195 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0520 13:30:04.775111  624195 round_trippers.go:463] GET https://192.168.39.92:8443/version
	I0520 13:30:04.775116  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.775125  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.775130  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.776778  624195 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 13:30:04.777087  624195 api_server.go:141] control plane version: v1.30.1
	I0520 13:30:04.777108  624195 api_server.go:131] duration metric: took 8.137141ms to wait for apiserver health ...
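
The healthz step is a plain GET against the API server's /healthz endpoint (a literal "ok" body means healthy), followed by GET /version to record the control-plane version. Expressed through the clientset's REST client, a sketch might look like this (assumes the setup from the first sketch; cs.Discovery().ServerVersion() would cover the /version call):

    // apiServerHealthy performs GET /healthz through the clientset's REST client
    // and treats a literal "ok" response body as healthy.
    func apiServerHealthy(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return false, err
        }
        return string(body) == "ok", nil
    }
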
	I0520 13:30:04.777116  624195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:30:04.949976  624195 request.go:629] Waited for 172.782765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:04.950054  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:04.950059  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.950067  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.950073  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.955312  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:04.959435  624195 system_pods.go:59] 17 kube-system pods found
	I0520 13:30:04.959467  624195 system_pods.go:61] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:30:04.959472  624195 system_pods.go:61] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:30:04.959476  624195 system_pods.go:61] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:30:04.959480  624195 system_pods.go:61] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:30:04.959485  624195 system_pods.go:61] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:30:04.959489  624195 system_pods.go:61] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:30:04.959493  624195 system_pods.go:61] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:30:04.959496  624195 system_pods.go:61] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:30:04.959499  624195 system_pods.go:61] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:30:04.959503  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:30:04.959506  624195 system_pods.go:61] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:30:04.959509  624195 system_pods.go:61] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:30:04.959512  624195 system_pods.go:61] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:30:04.959515  624195 system_pods.go:61] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:30:04.959518  624195 system_pods.go:61] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:30:04.959521  624195 system_pods.go:61] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:30:04.959525  624195 system_pods.go:61] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:30:04.959534  624195 system_pods.go:74] duration metric: took 182.412172ms to wait for pod list to return data ...
	I0520 13:30:04.959545  624195 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:30:05.149981  624195 request.go:629] Waited for 190.331678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:30:05.150064  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:30:05.150072  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.150086  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.150098  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.152988  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:05.153328  624195 default_sa.go:45] found service account: "default"
	I0520 13:30:05.153350  624195 default_sa.go:55] duration metric: took 193.798364ms for default service account to be created ...
	I0520 13:30:05.153362  624195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:30:05.350823  624195 request.go:629] Waited for 197.348911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:05.350904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:05.350912  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.350920  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.350933  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.356207  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:05.360566  624195 system_pods.go:86] 17 kube-system pods found
	I0520 13:30:05.360598  624195 system_pods.go:89] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:30:05.360603  624195 system_pods.go:89] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:30:05.360607  624195 system_pods.go:89] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:30:05.360611  624195 system_pods.go:89] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:30:05.360616  624195 system_pods.go:89] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:30:05.360620  624195 system_pods.go:89] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:30:05.360624  624195 system_pods.go:89] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:30:05.360628  624195 system_pods.go:89] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:30:05.360633  624195 system_pods.go:89] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:30:05.360638  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:30:05.360647  624195 system_pods.go:89] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:30:05.360656  624195 system_pods.go:89] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:30:05.360665  624195 system_pods.go:89] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:30:05.360670  624195 system_pods.go:89] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:30:05.360677  624195 system_pods.go:89] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:30:05.360682  624195 system_pods.go:89] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:30:05.360689  624195 system_pods.go:89] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:30:05.360694  624195 system_pods.go:126] duration metric: took 207.327087ms to wait for k8s-apps to be running ...
	I0520 13:30:05.360705  624195 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:30:05.360766  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:05.375858  624195 system_svc.go:56] duration metric: took 15.138836ms WaitForService to wait for kubelet
	I0520 13:30:05.375893  624195 kubeadm.go:576] duration metric: took 16.345645729s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:05.375920  624195 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:30:05.550424  624195 request.go:629] Waited for 174.382572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes
	I0520 13:30:05.550499  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes
	I0520 13:30:05.550504  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.550512  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.550517  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.554088  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:05.555225  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:05.555258  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:05.555296  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:05.555305  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:05.555312  624195 node_conditions.go:105] duration metric: took 179.386895ms to run NodePressure ...
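
The NodePressure check lists all nodes and records each one's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node in this run). Reading the same figures with client-go could look roughly like this, again assuming the clientset and imports from the first sketch:

    // printNodeCapacity lists every node and prints its CPU and
    // ephemeral-storage capacity, the two figures logged above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }
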
	I0520 13:30:05.555329  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:30:05.555366  624195 start.go:254] writing updated cluster config ...
	I0520 13:30:05.558538  624195 out.go:177] 
	I0520 13:30:05.561741  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:05.561844  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:05.564790  624195 out.go:177] * Starting "ha-170194-m03" control-plane node in "ha-170194" cluster
	I0520 13:30:05.566924  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:30:05.566982  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:30:05.567157  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:30:05.567172  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:30:05.567277  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:05.567499  624195 start.go:360] acquireMachinesLock for ha-170194-m03: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:30:05.567549  624195 start.go:364] duration metric: took 28.384µs to acquireMachinesLock for "ha-170194-m03"
	I0520 13:30:05.567564  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:05.567660  624195 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 13:30:05.570348  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:30:05.570457  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:05.570505  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:05.586606  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0520 13:30:05.587084  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:05.587595  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:05.587619  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:05.587936  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:05.588156  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:05.588293  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:05.588466  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:30:05.588493  624195 client.go:168] LocalClient.Create starting
	I0520 13:30:05.588527  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:30:05.588563  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:05.588579  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:05.588628  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:30:05.588647  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:05.588659  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:05.588675  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:30:05.588683  624195 main.go:141] libmachine: (ha-170194-m03) Calling .PreCreateCheck
	I0520 13:30:05.588822  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:05.589264  624195 main.go:141] libmachine: Creating machine...
	I0520 13:30:05.589282  624195 main.go:141] libmachine: (ha-170194-m03) Calling .Create
	I0520 13:30:05.589408  624195 main.go:141] libmachine: (ha-170194-m03) Creating KVM machine...
	I0520 13:30:05.590790  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found existing default KVM network
	I0520 13:30:05.590939  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found existing private KVM network mk-ha-170194
	I0520 13:30:05.591089  624195 main.go:141] libmachine: (ha-170194-m03) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 ...
	I0520 13:30:05.591115  624195 main.go:141] libmachine: (ha-170194-m03) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:30:05.591169  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.591067  625000 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:30:05.591288  624195 main.go:141] libmachine: (ha-170194-m03) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:30:05.855717  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.855568  625000 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa...
	I0520 13:30:05.951723  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.951593  625000 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/ha-170194-m03.rawdisk...
	I0520 13:30:05.951759  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Writing magic tar header
	I0520 13:30:05.951800  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Writing SSH key tar header
	I0520 13:30:05.951838  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.951730  625000 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 ...
	I0520 13:30:05.951862  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 (perms=drwx------)
	I0520 13:30:05.951879  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:30:05.951900  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03
	I0520 13:30:05.951915  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:30:05.951929  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:30:05.951944  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:30:05.951960  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:30:05.951979  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:30:05.951993  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:30:05.952008  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:30:05.952022  624195 main.go:141] libmachine: (ha-170194-m03) Creating domain...
	I0520 13:30:05.952031  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:30:05.952047  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:30:05.952060  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home
	I0520 13:30:05.952072  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Skipping /home - not owner
	I0520 13:30:05.953472  624195 main.go:141] libmachine: (ha-170194-m03) define libvirt domain using xml: 
	I0520 13:30:05.953498  624195 main.go:141] libmachine: (ha-170194-m03) <domain type='kvm'>
	I0520 13:30:05.953504  624195 main.go:141] libmachine: (ha-170194-m03)   <name>ha-170194-m03</name>
	I0520 13:30:05.953511  624195 main.go:141] libmachine: (ha-170194-m03)   <memory unit='MiB'>2200</memory>
	I0520 13:30:05.953528  624195 main.go:141] libmachine: (ha-170194-m03)   <vcpu>2</vcpu>
	I0520 13:30:05.953544  624195 main.go:141] libmachine: (ha-170194-m03)   <features>
	I0520 13:30:05.953550  624195 main.go:141] libmachine: (ha-170194-m03)     <acpi/>
	I0520 13:30:05.953560  624195 main.go:141] libmachine: (ha-170194-m03)     <apic/>
	I0520 13:30:05.953566  624195 main.go:141] libmachine: (ha-170194-m03)     <pae/>
	I0520 13:30:05.953572  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.953578  624195 main.go:141] libmachine: (ha-170194-m03)   </features>
	I0520 13:30:05.953584  624195 main.go:141] libmachine: (ha-170194-m03)   <cpu mode='host-passthrough'>
	I0520 13:30:05.953589  624195 main.go:141] libmachine: (ha-170194-m03)   
	I0520 13:30:05.953596  624195 main.go:141] libmachine: (ha-170194-m03)   </cpu>
	I0520 13:30:05.953603  624195 main.go:141] libmachine: (ha-170194-m03)   <os>
	I0520 13:30:05.953613  624195 main.go:141] libmachine: (ha-170194-m03)     <type>hvm</type>
	I0520 13:30:05.953626  624195 main.go:141] libmachine: (ha-170194-m03)     <boot dev='cdrom'/>
	I0520 13:30:05.953633  624195 main.go:141] libmachine: (ha-170194-m03)     <boot dev='hd'/>
	I0520 13:30:05.953645  624195 main.go:141] libmachine: (ha-170194-m03)     <bootmenu enable='no'/>
	I0520 13:30:05.953654  624195 main.go:141] libmachine: (ha-170194-m03)   </os>
	I0520 13:30:05.953659  624195 main.go:141] libmachine: (ha-170194-m03)   <devices>
	I0520 13:30:05.953666  624195 main.go:141] libmachine: (ha-170194-m03)     <disk type='file' device='cdrom'>
	I0520 13:30:05.953675  624195 main.go:141] libmachine: (ha-170194-m03)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/boot2docker.iso'/>
	I0520 13:30:05.953683  624195 main.go:141] libmachine: (ha-170194-m03)       <target dev='hdc' bus='scsi'/>
	I0520 13:30:05.953689  624195 main.go:141] libmachine: (ha-170194-m03)       <readonly/>
	I0520 13:30:05.953695  624195 main.go:141] libmachine: (ha-170194-m03)     </disk>
	I0520 13:30:05.953728  624195 main.go:141] libmachine: (ha-170194-m03)     <disk type='file' device='disk'>
	I0520 13:30:05.953755  624195 main.go:141] libmachine: (ha-170194-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:30:05.953771  624195 main.go:141] libmachine: (ha-170194-m03)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/ha-170194-m03.rawdisk'/>
	I0520 13:30:05.953782  624195 main.go:141] libmachine: (ha-170194-m03)       <target dev='hda' bus='virtio'/>
	I0520 13:30:05.953791  624195 main.go:141] libmachine: (ha-170194-m03)     </disk>
	I0520 13:30:05.953799  624195 main.go:141] libmachine: (ha-170194-m03)     <interface type='network'>
	I0520 13:30:05.953810  624195 main.go:141] libmachine: (ha-170194-m03)       <source network='mk-ha-170194'/>
	I0520 13:30:05.953818  624195 main.go:141] libmachine: (ha-170194-m03)       <model type='virtio'/>
	I0520 13:30:05.953841  624195 main.go:141] libmachine: (ha-170194-m03)     </interface>
	I0520 13:30:05.953873  624195 main.go:141] libmachine: (ha-170194-m03)     <interface type='network'>
	I0520 13:30:05.953892  624195 main.go:141] libmachine: (ha-170194-m03)       <source network='default'/>
	I0520 13:30:05.953904  624195 main.go:141] libmachine: (ha-170194-m03)       <model type='virtio'/>
	I0520 13:30:05.953915  624195 main.go:141] libmachine: (ha-170194-m03)     </interface>
	I0520 13:30:05.953925  624195 main.go:141] libmachine: (ha-170194-m03)     <serial type='pty'>
	I0520 13:30:05.953934  624195 main.go:141] libmachine: (ha-170194-m03)       <target port='0'/>
	I0520 13:30:05.953943  624195 main.go:141] libmachine: (ha-170194-m03)     </serial>
	I0520 13:30:05.953949  624195 main.go:141] libmachine: (ha-170194-m03)     <console type='pty'>
	I0520 13:30:05.953963  624195 main.go:141] libmachine: (ha-170194-m03)       <target type='serial' port='0'/>
	I0520 13:30:05.953986  624195 main.go:141] libmachine: (ha-170194-m03)     </console>
	I0520 13:30:05.954007  624195 main.go:141] libmachine: (ha-170194-m03)     <rng model='virtio'>
	I0520 13:30:05.954023  624195 main.go:141] libmachine: (ha-170194-m03)       <backend model='random'>/dev/random</backend>
	I0520 13:30:05.954034  624195 main.go:141] libmachine: (ha-170194-m03)     </rng>
	I0520 13:30:05.954044  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.954051  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.954062  624195 main.go:141] libmachine: (ha-170194-m03)   </devices>
	I0520 13:30:05.954070  624195 main.go:141] libmachine: (ha-170194-m03) </domain>
	I0520 13:30:05.954078  624195 main.go:141] libmachine: (ha-170194-m03) 
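The lines above are the libvirt domain XML the kvm2 driver defines for the new node, logged one element per line. As a rough sketch only: the same definition could be applied by hand with virsh, assuming the XML were saved to a file named ha-170194-m03.xml (hypothetical path; the driver uses the libvirt API rather than the CLI):

	# Sketch only: define and start a domain like the one logged above by hand.
	virsh --connect qemu:///system define ha-170194-m03.xml
	virsh --connect qemu:///system start ha-170194-m03
	# mirrors the "Getting domain xml..." step further down
	virsh --connect qemu:///system dumpxml ha-170194-m03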
	I0520 13:30:05.962043  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:5d:3c:46 in network default
	I0520 13:30:05.962773  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring networks are active...
	I0520 13:30:05.962808  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:05.963634  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring network default is active
	I0520 13:30:05.963959  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring network mk-ha-170194 is active
	I0520 13:30:05.964293  624195 main.go:141] libmachine: (ha-170194-m03) Getting domain xml...
	I0520 13:30:05.965021  624195 main.go:141] libmachine: (ha-170194-m03) Creating domain...
	I0520 13:30:07.255402  624195 main.go:141] libmachine: (ha-170194-m03) Waiting to get IP...
	I0520 13:30:07.256427  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.256890  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.256945  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.256883  625000 retry.go:31] will retry after 275.904132ms: waiting for machine to come up
	I0520 13:30:07.534625  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.535196  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.535228  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.535150  625000 retry.go:31] will retry after 354.965705ms: waiting for machine to come up
	I0520 13:30:07.891830  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.892379  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.892418  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.892313  625000 retry.go:31] will retry after 448.861988ms: waiting for machine to come up
	I0520 13:30:08.342904  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:08.343449  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:08.343481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:08.343408  625000 retry.go:31] will retry after 497.367289ms: waiting for machine to come up
	I0520 13:30:08.842056  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:08.842470  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:08.842499  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:08.842428  625000 retry.go:31] will retry after 747.853284ms: waiting for machine to come up
	I0520 13:30:09.591931  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:09.592481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:09.592515  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:09.592408  625000 retry.go:31] will retry after 600.738064ms: waiting for machine to come up
	I0520 13:30:10.195213  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:10.195595  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:10.195622  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:10.195553  625000 retry.go:31] will retry after 1.013177893s: waiting for machine to come up
	I0520 13:30:11.210907  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:11.211446  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:11.211481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:11.211368  625000 retry.go:31] will retry after 1.118159499s: waiting for machine to come up
	I0520 13:30:12.330917  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:12.331414  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:12.331438  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:12.331362  625000 retry.go:31] will retry after 1.645480289s: waiting for machine to come up
	I0520 13:30:13.979298  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:13.979838  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:13.979897  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:13.979810  625000 retry.go:31] will retry after 2.237022659s: waiting for machine to come up
	I0520 13:30:16.218340  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:16.218879  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:16.218910  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:16.218826  625000 retry.go:31] will retry after 2.212494575s: waiting for machine to come up
	I0520 13:30:18.434192  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:18.434650  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:18.434679  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:18.434600  625000 retry.go:31] will retry after 3.191824667s: waiting for machine to come up
	I0520 13:30:21.628441  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:21.628825  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:21.628849  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:21.628788  625000 retry.go:31] will retry after 2.775656421s: waiting for machine to come up
	I0520 13:30:24.406421  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:24.406849  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:24.406882  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:24.406800  625000 retry.go:31] will retry after 3.444701645s: waiting for machine to come up
	I0520 13:30:27.854117  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.854571  624195 main.go:141] libmachine: (ha-170194-m03) Found IP for machine: 192.168.39.3
	I0520 13:30:27.854593  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has current primary IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.854601  624195 main.go:141] libmachine: (ha-170194-m03) Reserving static IP address...
	I0520 13:30:27.854992  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find host DHCP lease matching {name: "ha-170194-m03", mac: "52:54:00:f7:7b:a7", ip: "192.168.39.3"} in network mk-ha-170194
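The retry loop above is the driver polling libvirt for a DHCP lease on the new domain's MAC until an address appears (here roughly 22 seconds, ending at 192.168.39.3). A minimal shell equivalent, with the network name and MAC taken from the log (sketch only; the driver queries the libvirt API directly and backs off with growing delays rather than a fixed sleep):

	# Sketch: wait for the lease the same way the retry loop above does.
	until virsh --connect qemu:///system net-dhcp-leases mk-ha-170194 | grep -q '52:54:00:f7:7b:a7'; do
	    sleep 2
	done
	# shows 192.168.39.3 once the lease exists
	virsh --connect qemu:///system net-dhcp-leases mk-ha-170194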
	I0520 13:30:27.932359  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Getting to WaitForSSH function...
	I0520 13:30:27.932385  624195 main.go:141] libmachine: (ha-170194-m03) Reserved static IP address: 192.168.39.3
	I0520 13:30:27.932399  624195 main.go:141] libmachine: (ha-170194-m03) Waiting for SSH to be available...
	I0520 13:30:27.934878  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.935312  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194
	I0520 13:30:27.935346  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find defined IP address of network mk-ha-170194 interface with MAC address 52:54:00:f7:7b:a7
	I0520 13:30:27.935542  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH client type: external
	I0520 13:30:27.935566  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa (-rw-------)
	I0520 13:30:27.935608  624195 main.go:141] libmachine: (ha-170194-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:30:27.935629  624195 main.go:141] libmachine: (ha-170194-m03) DBG | About to run SSH command:
	I0520 13:30:27.935647  624195 main.go:141] libmachine: (ha-170194-m03) DBG | exit 0
	I0520 13:30:27.940409  624195 main.go:141] libmachine: (ha-170194-m03) DBG | SSH cmd err, output: exit status 255: 
	I0520 13:30:27.940438  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 13:30:27.940481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | command : exit 0
	I0520 13:30:27.940512  624195 main.go:141] libmachine: (ha-170194-m03) DBG | err     : exit status 255
	I0520 13:30:27.940529  624195 main.go:141] libmachine: (ha-170194-m03) DBG | output  : 
	I0520 13:30:30.941487  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Getting to WaitForSSH function...
	I0520 13:30:30.944403  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:30.944860  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:30.944889  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:30.945064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH client type: external
	I0520 13:30:30.945166  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa (-rw-------)
	I0520 13:30:30.945195  624195 main.go:141] libmachine: (ha-170194-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:30:30.945204  624195 main.go:141] libmachine: (ha-170194-m03) DBG | About to run SSH command:
	I0520 13:30:30.945264  624195 main.go:141] libmachine: (ha-170194-m03) DBG | exit 0
	I0520 13:30:31.069381  624195 main.go:141] libmachine: (ha-170194-m03) DBG | SSH cmd err, output: <nil>: 
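WaitForSSH shells out to /usr/bin/ssh with the option list logged at 13:30:30.945195; the first probe at 13:30:27 fails with status 255 because no IP address is known yet, and the retry succeeds once the lease for 192.168.39.3 exists. Reassembled as a single command, with every value copied from the log:

	# The SSH liveness probe, reassembled from the logged option list.
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa \
	    -p 22 docker@192.168.39.3 'exit 0'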
	I0520 13:30:31.069705  624195 main.go:141] libmachine: (ha-170194-m03) KVM machine creation complete!
	I0520 13:30:31.070179  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:31.070838  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:31.071068  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:31.071215  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:30:31.071237  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:30:31.072478  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:30:31.072498  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:30:31.072504  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:30:31.072510  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.075064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.075496  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.075528  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.075719  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.075920  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.076108  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.076251  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.076493  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.076760  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.076775  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:30:31.180572  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:30:31.180602  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:30:31.180613  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.183547  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.183912  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.183935  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.184140  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.184355  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.184491  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.184677  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.184820  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.185060  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.185081  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:30:31.285778  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:30:31.285845  624195 main.go:141] libmachine: found compatible host: buildroot
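Provisioner detection here amounts to reading /etc/os-release over SSH and matching the ID field; a trivial sketch of the same check run on the guest:

	# Sketch: the provisioner check boils down to the ID field of os-release.
	. /etc/os-release && echo "$ID"    # prints "buildroot" on this ISO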
	I0520 13:30:31.285854  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:30:31.285865  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.286164  624195 buildroot.go:166] provisioning hostname "ha-170194-m03"
	I0520 13:30:31.286194  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.286370  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.288853  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.289225  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.289276  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.289382  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.289567  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.289765  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.289918  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.290167  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.290341  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.290354  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194-m03 && echo "ha-170194-m03" | sudo tee /etc/hostname
	I0520 13:30:31.407000  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194-m03
	
	I0520 13:30:31.407034  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.410020  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.410487  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.410513  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.410772  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.411020  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.411193  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.411372  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.411570  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.411761  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.411784  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:30:31.521414  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
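The two commands above set the transient and persistent hostname and then patch /etc/hosts so the name resolves locally. A quick check of the result on the guest (sketch only):

	# Sketch: verify what the hostname provisioning above left behind.
	hostname                                   # expect: ha-170194-m03
	grep 'ha-170194-m03' /etc/hostname /etc/hosts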
	I0520 13:30:31.521456  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:30:31.521476  624195 buildroot.go:174] setting up certificates
	I0520 13:30:31.521489  624195 provision.go:84] configureAuth start
	I0520 13:30:31.521500  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.521821  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:31.524618  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.525057  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.525088  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.525268  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.527520  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.527911  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.527937  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.528156  624195 provision.go:143] copyHostCerts
	I0520 13:30:31.528194  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:30:31.528231  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:30:31.528240  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:30:31.528303  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:30:31.528374  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:30:31.528408  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:30:31.528421  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:30:31.528458  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:30:31.528526  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:30:31.528548  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:30:31.528554  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:30:31.528588  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:30:31.528657  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194-m03 san=[127.0.0.1 192.168.39.3 ha-170194-m03 localhost minikube]
	I0520 13:30:31.628385  624195 provision.go:177] copyRemoteCerts
	I0520 13:30:31.628464  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:30:31.628502  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.631324  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.631739  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.631770  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.631960  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.632184  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.632349  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.632518  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:31.721337  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:30:31.721432  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:30:31.743764  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:30:31.743859  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 13:30:31.767363  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:30:31.767462  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:30:31.795379  624195 provision.go:87] duration metric: took 273.870594ms to configureAuth
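configureAuth generates a server certificate with the SANs listed at 13:30:31.528657 and copies it, with the CA and server key, to /etc/docker on the node. For illustration only, roughly equivalent material could be produced with openssl as a stand-in (an assumption: minikube generates these certificates in Go, not via openssl; file names here match the logged targets):

	# Sketch only: a server key/cert with the SANs from the log, openssl as a stand-in.
	openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.ha-170194-m03"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.3,DNS:ha-170194-m03,DNS:localhost,DNS:minikube\n")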
	I0520 13:30:31.795419  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:30:31.795665  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:31.795746  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.798495  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.798948  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.798994  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.799161  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.799350  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.799496  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.799675  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.799897  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.800090  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.800113  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:30:32.073684  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
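The command logged at 13:30:31.800113 lost a format argument in the logger (the %!s(MISSING) placeholder); judging from the output echoed back above, the intended command writes the CRIO_MINIKUBE_OPTIONS drop-in and restarts CRI-O. Reconstruction, not a new step:

	# Reconstruction (assumption: the missing printf argument is the options block echoed above).
	sudo mkdir -p /etc/sysconfig && printf "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio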
	I0520 13:30:32.073714  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:30:32.073723  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetURL
	I0520 13:30:32.075156  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using libvirt version 6000000
	I0520 13:30:32.077610  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.077972  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.078001  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.078234  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:30:32.078252  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:30:32.078261  624195 client.go:171] duration metric: took 26.489757298s to LocalClient.Create
	I0520 13:30:32.078288  624195 start.go:167] duration metric: took 26.489823409s to libmachine.API.Create "ha-170194"
	I0520 13:30:32.078298  624195 start.go:293] postStartSetup for "ha-170194-m03" (driver="kvm2")
	I0520 13:30:32.078309  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:30:32.078331  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.078592  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:30:32.078616  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.081048  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.081473  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.081494  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.081663  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.081879  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.082086  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.082265  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.163555  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:30:32.168040  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:30:32.168079  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:30:32.168163  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:30:32.168278  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:30:32.168292  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:30:32.168411  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:30:32.177451  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:30:32.200519  624195 start.go:296] duration metric: took 122.205083ms for postStartSetup
	I0520 13:30:32.200585  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:32.201271  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:32.204064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.204529  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.204561  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.204794  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:32.205000  624195 start.go:128] duration metric: took 26.637328376s to createHost
	I0520 13:30:32.205036  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.207628  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.208082  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.208111  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.208299  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.208496  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.208664  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.208798  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.208963  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:32.209157  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:32.209166  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:30:32.313842  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211832.294893931
	
	I0520 13:30:32.313871  624195 fix.go:216] guest clock: 1716211832.294893931
	I0520 13:30:32.313881  624195 fix.go:229] Guest: 2024-05-20 13:30:32.294893931 +0000 UTC Remote: 2024-05-20 13:30:32.20501386 +0000 UTC m=+157.451878754 (delta=89.880071ms)
	I0520 13:30:32.313910  624195 fix.go:200] guest clock delta is within tolerance: 89.880071ms
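The clock probe at 13:30:32.209166 also had its format verbs eaten by the logger; given the 1716211832.294893931 reply, the command is almost certainly:

	# Reconstruction (assumption: the mangled verbs stand for %s and %N).
	date +%s.%N    # guest: 1716211832.294893931, ~90ms off the host estimate, within tolerance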
	I0520 13:30:32.313917  624195 start.go:83] releasing machines lock for "ha-170194-m03", held for 26.746361199s
	I0520 13:30:32.313941  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.314262  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:32.317143  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.317565  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.317592  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.320794  624195 out.go:177] * Found network options:
	I0520 13:30:32.323012  624195 out.go:177]   - NO_PROXY=192.168.39.92,192.168.39.155
	W0520 13:30:32.325151  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 13:30:32.325178  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:30:32.325195  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.325868  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.326135  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.326282  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:30:32.326330  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	W0520 13:30:32.326450  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 13:30:32.326478  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:30:32.326551  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:30:32.326578  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.329559  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.329733  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.329971  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.329999  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.330027  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.330046  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.330200  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.330339  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.330447  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.330547  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.330622  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.330703  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.330764  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.330850  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.569233  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:30:32.574887  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:30:32.574990  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:30:32.590259  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:30:32.590285  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:30:32.590371  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:30:32.607145  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:30:32.620710  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:30:32.620766  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:30:32.636122  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:30:32.649419  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:30:32.767377  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:30:32.904453  624195 docker.go:233] disabling docker service ...
	I0520 13:30:32.904532  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:30:32.919111  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:30:32.934079  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:30:33.065432  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:30:33.208470  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:30:33.221756  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:30:33.239327  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:30:33.239396  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.249566  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:30:33.249628  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.259729  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.269936  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.280434  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:30:33.291428  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.301588  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.319083  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
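
For context on the sed invocations above: they rewrite single `key = value` lines in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, and the default_sysctls entry that opens unprivileged ports). The following is a minimal local Go sketch of the same kind of in-place key rewrite, not minikube's ssh_runner-based code; the path and key/value pairs are taken from the log, while running locally and failing when a key is absent are assumptions of the sketch.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces a whole `key = ...` line in the drop-in with `key = "value"`,
// much like the `sed -i 's|^.*key = .*$|...|'` calls in the log.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s has no %q line", path, key)
	}
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path used throughout the log
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.9",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setKey(conf, k, v); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
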
	I0520 13:30:33.329307  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:30:33.338655  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:30:33.338709  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:30:33.352806  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
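
The three commands above form a small fallback chain: read net.bridge.bridge-nf-call-iptables, load br_netfilter when the key is missing, then enable IPv4 forwarding. A sketch of that sequence as a standalone Go program (run as root on the host itself, whereas the log performs it over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: check the sysctl key,
// load br_netfilter if it cannot be read, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The key only appears once br_netfilter is loaded, so a failure here is expected.
		if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", merr, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
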
	I0520 13:30:33.362484  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:33.474132  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:30:33.604602  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:30:33.604688  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:30:33.609703  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:30:33.609778  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:30:33.614003  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:30:33.657808  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:30:33.657897  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:30:33.685063  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:30:33.714493  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:30:33.716563  624195 out.go:177]   - env NO_PROXY=192.168.39.92
	I0520 13:30:33.718655  624195 out.go:177]   - env NO_PROXY=192.168.39.92,192.168.39.155
	I0520 13:30:33.720506  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:33.723281  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:33.723726  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:33.723759  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:33.723993  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:30:33.728492  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:30:33.740260  624195 mustload.go:65] Loading cluster: ha-170194
	I0520 13:30:33.740552  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:33.740896  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:33.740940  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:33.756043  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0520 13:30:33.756479  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:33.756976  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:33.756998  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:33.757399  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:33.757626  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:30:33.759265  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:30:33.759544  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:33.759589  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:33.774705  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0520 13:30:33.775152  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:33.775634  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:33.775657  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:33.775953  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:33.776165  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:30:33.776372  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.3
	I0520 13:30:33.776384  624195 certs.go:194] generating shared ca certs ...
	I0520 13:30:33.776404  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:33.776535  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:30:33.776588  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:30:33.776602  624195 certs.go:256] generating profile certs ...
	I0520 13:30:33.776691  624195 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:30:33.776723  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2
	I0520 13:30:33.776747  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.3 192.168.39.254]
	I0520 13:30:34.113198  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 ...
	I0520 13:30:34.113235  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2: {Name:mkf5e6820326fafcde9d57b89600ed56eebf0206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:34.113459  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2 ...
	I0520 13:30:34.113479  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2: {Name:mk508eb53b19d6075bb0e8a9ef600d6014e40055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:34.113580  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:30:34.113736  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
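
The step above issues the API-server serving certificate for the new control plane: its SANs cover the service VIP 10.96.0.1, localhost, the three node IPs and the HA VIP 192.168.39.254, and it is signed by the shared minikube CA. Below is a self-contained sketch of issuing such a certificate with Go's crypto/x509; the hypothetical ca.crt/ca.key paths and the RSA/PKCS#1 key format are assumptions, and this is not the crypto.go implementation referenced in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA reads a PEM CA certificate and its RSA (PKCS#1) private key.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return nil, nil, err
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, nil, err
	}
	cb, _ := pem.Decode(certPEM)
	kb, _ := pem.Decode(keyPEM)
	if cb == nil || kb == nil {
		return nil, nil, fmt.Errorf("no PEM data in %s / %s", certPath, keyPath)
	}
	caCert, err := x509.ParseCertificate(cb.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	if err != nil {
		return nil, nil, err
	}
	return caCert, caKey, nil
}

func main() {
	caCert, caKey, err := loadCA("ca.crt", "ca.key") // hypothetical local copies of the shared CA
	if err != nil {
		panic(err)
	}
	// SANs mirror the IP list logged by crypto.go:68 above.
	var sans []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.39.92", "192.168.39.155", "192.168.39.3", "192.168.39.254"} {
		sans = append(sans, net.ParseIP(s))
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  sans,
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Write the pair under illustrative names; the log moves its versions into
	// apiserver.crt / apiserver.key under the profile directory.
	if err := os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600); err != nil {
		panic(err)
	}
}
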
	I0520 13:30:34.113902  624195 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:30:34.113923  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:30:34.113973  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:30:34.113996  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:30:34.114014  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:30:34.114034  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:30:34.114053  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:30:34.114072  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:30:34.114089  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:30:34.114155  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:30:34.114196  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:30:34.114219  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:30:34.114266  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:30:34.114297  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:30:34.114335  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:30:34.114399  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:30:34.114440  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.114462  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.114479  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.114525  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:30:34.117904  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:34.118360  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:30:34.118383  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:34.118556  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:30:34.118815  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:30:34.119010  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:30:34.119191  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:30:34.189734  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0520 13:30:34.194562  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 13:30:34.213703  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0520 13:30:34.217803  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 13:30:34.229298  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 13:30:34.233740  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 13:30:34.251363  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0520 13:30:34.259057  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 13:30:34.272535  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0520 13:30:34.276778  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 13:30:34.287992  624195 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0520 13:30:34.291840  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 13:30:34.306446  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:30:34.330696  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:30:34.352546  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:30:34.374486  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:30:34.395715  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 13:30:34.417427  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:30:34.440656  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:30:34.463384  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:30:34.486723  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:30:34.509426  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:30:34.531288  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:30:34.553354  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 13:30:34.569607  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 13:30:34.585507  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 13:30:34.600625  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 13:30:34.617392  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 13:30:34.634444  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 13:30:34.651286  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 13:30:34.667991  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:30:34.673650  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:30:34.684113  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.688566  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.688616  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.694066  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:30:34.704778  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:30:34.715397  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.719759  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.719848  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.726249  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:30:34.737957  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:30:34.749329  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.753506  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.753675  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.759062  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:30:34.770545  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:30:34.774401  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:30:34.774457  624195 kubeadm.go:928] updating node {m03 192.168.39.3 8443 v1.30.1 crio true true} ...
	I0520 13:30:34.774558  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:30:34.774589  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:30:34.774630  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:30:34.789410  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:30:34.791335  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
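
The manifest above is written a few lines later to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that holds the 192.168.39.254 VIP and load-balances port 8443 across the control-plane nodes. As a rough illustration of producing such a manifest from a handful of per-cluster values, here is a stripped-down text/template sketch (an illustrative stand-in, not minikube's kube-vip template; only the VIP, interface and port are parameterised):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the static pod manifest shown above; only the
// values that vary per cluster (interface, port, VIP) are templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	data := struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443} // values from the log above
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
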
	I0520 13:30:34.791392  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:30:34.801200  624195 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 13:30:34.801287  624195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 13:30:34.810988  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 13:30:34.810999  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 13:30:34.811015  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:30:34.810996  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 13:30:34.811054  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:34.811064  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:30:34.811088  624195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:30:34.811141  624195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:30:34.828229  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:30:34.828324  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 13:30:34.828347  624195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:30:34.828363  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 13:30:34.828383  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 13:30:34.828407  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 13:30:34.843958  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 13:30:34.844008  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
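
Since /var/lib/minikube/binaries/v1.30.1 is empty on the new node, kubectl, kubeadm and kubelet are copied from the local cache, which is itself filled from dl.k8s.io with checksum verification (the ?checksum=file:...sha256 suffix logged above). A minimal stdlib sketch of downloading one binary and checking it against its published .sha256 file; the output filename is illustrative and this is not the downloader minikube uses:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet" // same URL as in the log
	got, err := fetch(url, "kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(body))
	if len(fields) == 0 || fields[0] != got {
		panic("checksum mismatch for kubelet")
	}
	fmt.Println("kubelet verified, sha256", got)
}
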
	I0520 13:30:35.711844  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 13:30:35.721772  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0520 13:30:35.739516  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:30:35.756221  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 13:30:35.774613  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:30:35.778519  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:30:35.790710  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:35.916011  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:30:35.933865  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:30:35.934374  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:35.934441  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:35.950848  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0520 13:30:35.951361  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:35.951824  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:35.951849  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:35.952191  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:35.952474  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:30:35.952720  624195 start.go:316] joinCluster: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:30:35.952861  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 13:30:35.952885  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:30:35.956312  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:35.956776  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:30:35.956808  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:35.956971  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:30:35.957156  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:30:35.957328  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:30:35.957489  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:30:36.186912  624195 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:36.186977  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsr1q3.gj6neebntzvy8le2 --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m03 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443"
	I0520 13:31:05.011535  624195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsr1q3.gj6neebntzvy8le2 --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m03 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443": (28.824526203s)
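
The join command above pins the cluster CA with --discovery-token-ca-cert-hash; kubeadm defines that hash as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short sketch for recomputing the value from a CA certificate, assuming a local copy at the hypothetical path ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

For the CA used in this run, the printed value should match the sha256:... argument shown in the join command above.
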
	I0520 13:31:05.011580  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 13:31:05.524316  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194-m03 minikube.k8s.io/updated_at=2024_05_20T13_31_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=false
	I0520 13:31:05.658744  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170194-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 13:31:05.798072  624195 start.go:318] duration metric: took 29.845347226s to joinCluster
	I0520 13:31:05.798171  624195 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:31:05.800581  624195 out.go:177] * Verifying Kubernetes components...
	I0520 13:31:05.798564  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:31:05.802637  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:31:05.992517  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:31:06.013170  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:31:06.013455  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 13:31:06.013560  624195 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.92:8443
	I0520 13:31:06.013797  624195 node_ready.go:35] waiting up to 6m0s for node "ha-170194-m03" to be "Ready" ...
	I0520 13:31:06.013901  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:06.013911  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:06.013920  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:06.013929  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:06.017203  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:06.515041  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:06.515066  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:06.515075  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:06.515078  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:06.519172  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:07.014089  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:07.014123  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:07.014135  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:07.014142  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:07.017702  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:07.514876  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:07.514902  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:07.514910  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:07.514913  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:07.518287  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:08.014401  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:08.014431  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:08.014440  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:08.014443  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:08.026363  624195 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 13:31:08.027598  624195 node_ready.go:53] node "ha-170194-m03" has status "Ready":"False"
	I0520 13:31:08.514624  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:08.514657  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:08.514666  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:08.514672  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:08.518249  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:09.014247  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:09.014273  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:09.014280  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:09.014285  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:09.017946  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:09.514146  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:09.514179  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:09.514190  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:09.514194  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:09.517927  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.014405  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:10.014430  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:10.014437  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:10.014442  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:10.018434  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.514854  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:10.514883  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:10.514898  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:10.514903  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:10.518625  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.519080  624195 node_ready.go:53] node "ha-170194-m03" has status "Ready":"False"
	I0520 13:31:11.014264  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:11.014285  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:11.014295  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:11.014300  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:11.018048  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:11.514545  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:11.514574  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:11.514584  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:11.514592  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:11.518182  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:12.014767  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:12.014791  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:12.014799  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:12.014803  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:12.018424  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:12.514459  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:12.514487  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:12.514496  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:12.514511  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:12.517977  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.014776  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.014799  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.014807  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.014812  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.018553  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.019163  624195 node_ready.go:49] node "ha-170194-m03" has status "Ready":"True"
	I0520 13:31:13.019186  624195 node_ready.go:38] duration metric: took 7.005369464s for node "ha-170194-m03" to be "Ready" ...
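
The loop above is the readiness wait: minikube GETs /api/v1/nodes/ha-170194-m03 roughly every 500ms until the node reports Ready, which here took about 7 seconds. An equivalent standalone check written against client-go is sketched below, assuming the kubeconfig path from the log; it mirrors the wait but is not the node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and node name are taken from the log; adjust for other clusters.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18929-602525/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-170194-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling interval seen above
	}
	panic("timed out waiting for node to become Ready")
}

A production caller would distinguish NotFound from transient API errors; this sketch simply treats any error as "not ready yet" and keeps polling.
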
	I0520 13:31:13.019204  624195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:31:13.019298  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:13.019310  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.019321  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.019332  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.030581  624195 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 13:31:13.037455  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.037554  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s28r6
	I0520 13:31:13.037561  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.037572  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.037582  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.041871  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:13.042775  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.042795  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.042802  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.042805  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.047300  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:13.048039  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.048065  624195 pod_ready.go:81] duration metric: took 10.575387ms for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.048078  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.048164  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vk78q
	I0520 13:31:13.048175  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.048186  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.048191  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.052157  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.053021  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.053041  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.053051  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.053057  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.056084  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.056704  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.056730  624195 pod_ready.go:81] duration metric: took 8.643405ms for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.056743  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.056829  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194
	I0520 13:31:13.056841  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.056851  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.056856  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.060227  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.061330  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.061346  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.061353  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.061357  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.063748  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.064252  624195 pod_ready.go:92] pod "etcd-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.064272  624195 pod_ready.go:81] duration metric: took 7.521309ms for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.064281  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.064430  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:31:13.064450  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.064462  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.064468  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.067471  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.068335  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:13.068352  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.068360  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.068365  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.070826  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.071358  624195 pod_ready.go:92] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.071381  624195 pod_ready.go:81] duration metric: took 7.0933ms for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.071390  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.215767  624195 request.go:629] Waited for 144.303996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.215834  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.215839  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.215847  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.215852  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.219854  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.415135  624195 request.go:629] Waited for 194.54887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.415199  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.415204  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.415212  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.415216  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.418216  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.615926  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.615954  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.615966  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.615976  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.619529  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.815234  624195 request.go:629] Waited for 194.980132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.815321  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.815327  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.815335  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.815339  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.818403  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.072366  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:14.072392  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.072400  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.072409  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.076193  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.215202  624195 request.go:629] Waited for 138.334855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.215274  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.215281  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.215293  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.215303  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.218526  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.571911  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:14.571936  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.571944  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.571949  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.574981  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.615120  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.615144  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.615157  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.615163  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.619146  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.071983  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:15.072025  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.072033  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.072039  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.075738  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.076783  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:15.076801  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.076813  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.076818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.080319  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.081093  624195 pod_ready.go:102] pod "etcd-ha-170194-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 13:31:15.572090  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:15.572114  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.572121  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.572125  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.575935  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.577358  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:15.577374  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.577380  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.577383  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.580077  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.072328  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:16.072370  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.072388  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.072392  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.075823  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.076583  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:16.076601  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.076612  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.076618  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.079633  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.080376  624195 pod_ready.go:92] pod "etcd-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.080404  624195 pod_ready.go:81] duration metric: took 3.009005007s for pod "etcd-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.080427  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.080516  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194
	I0520 13:31:16.080528  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.080539  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.080545  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.083475  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.215453  624195 request.go:629] Waited for 131.322215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:16.215521  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:16.215526  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.215534  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.215537  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.218968  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.219547  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.219581  624195 pod_ready.go:81] duration metric: took 139.142475ms for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.219600  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.414834  624195 request.go:629] Waited for 195.128013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:31:16.414904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:31:16.414912  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.414924  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.414931  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.418491  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.615788  624195 request.go:629] Waited for 196.397178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:16.615904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:16.615912  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.615920  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.615926  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.619495  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.620081  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.620102  624195 pod_ready.go:81] duration metric: took 400.491978ms for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.620115  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.815192  624195 request.go:629] Waited for 194.989325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m03
	I0520 13:31:16.815261  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m03
	I0520 13:31:16.815267  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.815274  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.815278  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.818421  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.015533  624195 request.go:629] Waited for 196.24531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:17.015607  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:17.015614  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.015624  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.015636  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.022248  624195 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 13:31:17.023408  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.023431  624195 pod_ready.go:81] duration metric: took 403.30886ms for pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.023442  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.215734  624195 request.go:629] Waited for 192.175061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:31:17.215807  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:31:17.215815  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.215828  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.215836  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.219228  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.415232  624195 request.go:629] Waited for 195.384768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:17.415324  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:17.415332  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.415345  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.415355  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.419687  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:17.420313  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.420341  624195 pod_ready.go:81] duration metric: took 396.891022ms for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.420356  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.615311  624195 request.go:629] Waited for 194.86432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:31:17.615384  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:31:17.615390  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.615402  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.615409  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.619221  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.814817  624195 request.go:629] Waited for 194.943333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:17.814896  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:17.814901  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.814910  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.814917  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.818114  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.818728  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.818754  624195 pod_ready.go:81] duration metric: took 398.390202ms for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.818768  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:18.015719  624195 request.go:629] Waited for 196.878935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.015788  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.015793  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.015801  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.015804  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.019535  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.215470  624195 request.go:629] Waited for 195.360843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.215557  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.215562  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.215568  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.215573  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.219147  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.415659  624195 request.go:629] Waited for 96.287075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.415765  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.415779  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.415790  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.415801  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.419431  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.615483  624195 request.go:629] Waited for 195.37727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.615548  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.615554  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.615562  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.615566  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.618117  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:18.819673  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.819703  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.819714  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.819721  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.823309  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.015398  624195 request.go:629] Waited for 191.428653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:19.015458  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:19.015463  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.015471  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.015475  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.018833  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.019547  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.019569  624195 pod_ready.go:81] duration metric: took 1.200793801s for pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.019585  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.214958  624195 request.go:629] Waited for 195.280082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:31:19.215061  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:31:19.215069  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.215080  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.215087  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.218621  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.414947  624195 request.go:629] Waited for 195.319457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:19.415069  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:19.415083  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.415093  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.415102  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.418554  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.419277  624195 pod_ready.go:92] pod "kube-proxy-7ncvb" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.419309  624195 pod_ready.go:81] duration metric: took 399.714792ms for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.419324  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.615253  624195 request.go:629] Waited for 195.822388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:31:19.615320  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:31:19.615325  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.615334  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.615341  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.619457  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:19.815371  624195 request.go:629] Waited for 194.935251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:19.815435  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:19.815441  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.815449  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.815454  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.819118  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.819715  624195 pod_ready.go:92] pod "kube-proxy-qth8f" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.819739  624195 pod_ready.go:81] duration metric: took 400.407376ms for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.819749  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x79p4" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.015290  624195 request.go:629] Waited for 195.444697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x79p4
	I0520 13:31:20.015376  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x79p4
	I0520 13:31:20.015385  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.015396  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.015408  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.018963  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.214916  624195 request.go:629] Waited for 195.313944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:20.215022  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:20.215034  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.215045  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.215053  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.218191  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.218728  624195 pod_ready.go:92] pod "kube-proxy-x79p4" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:20.218749  624195 pod_ready.go:81] duration metric: took 398.99258ms for pod "kube-proxy-x79p4" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.218758  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.415324  624195 request.go:629] Waited for 196.464631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:31:20.415398  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:31:20.415406  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.415417  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.415428  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.418650  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.614968  624195 request.go:629] Waited for 195.495433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:20.615073  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:20.615083  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.615096  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.615105  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.618843  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.619666  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:20.619692  624195 pod_ready.go:81] duration metric: took 400.925254ms for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.619706  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.815717  624195 request.go:629] Waited for 195.912804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:31:20.815792  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:31:20.815797  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.815805  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.815815  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.819303  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.015424  624195 request.go:629] Waited for 195.520036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:21.015488  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:21.015493  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.015501  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.015505  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.018661  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.019331  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:21.019354  624195 pod_ready.go:81] duration metric: took 399.641422ms for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.019365  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.215514  624195 request.go:629] Waited for 196.051281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m03
	I0520 13:31:21.215610  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m03
	I0520 13:31:21.215622  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.215633  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.215643  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.219132  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.415037  624195 request.go:629] Waited for 195.328033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:21.415119  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:21.415181  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.415195  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.415200  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.419418  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:21.420515  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:21.420541  624195 pod_ready.go:81] duration metric: took 401.168267ms for pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.420557  624195 pod_ready.go:38] duration metric: took 8.401336746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:31:21.420582  624195 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:31:21.420667  624195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:31:21.438240  624195 api_server.go:72] duration metric: took 15.640012749s to wait for apiserver process to appear ...
	I0520 13:31:21.438273  624195 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:31:21.438293  624195 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0520 13:31:21.442679  624195 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0520 13:31:21.442760  624195 round_trippers.go:463] GET https://192.168.39.92:8443/version
	I0520 13:31:21.442768  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.442775  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.442783  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.443594  624195 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 13:31:21.443657  624195 api_server.go:141] control plane version: v1.30.1
	I0520 13:31:21.443671  624195 api_server.go:131] duration metric: took 5.392584ms to wait for apiserver health ...
	I0520 13:31:21.443681  624195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:31:21.615199  624195 request.go:629] Waited for 171.390196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:21.615275  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:21.615284  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.615295  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.615303  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.622356  624195 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 13:31:21.628951  624195 system_pods.go:59] 24 kube-system pods found
	I0520 13:31:21.628985  624195 system_pods.go:61] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:31:21.628991  624195 system_pods.go:61] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:31:21.628995  624195 system_pods.go:61] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:31:21.628998  624195 system_pods.go:61] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:31:21.629002  624195 system_pods.go:61] "etcd-ha-170194-m03" [22d1124d-4ec7-4721-94d7-b05ee48e4f04] Running
	I0520 13:31:21.629005  624195 system_pods.go:61] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:31:21.629008  624195 system_pods.go:61] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:31:21.629011  624195 system_pods.go:61] "kindnet-q72lt" [1ff7bf65-cfec-4a8d-acb6-7177d005791f] Running
	I0520 13:31:21.629014  624195 system_pods.go:61] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:31:21.629017  624195 system_pods.go:61] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:31:21.629022  624195 system_pods.go:61] "kube-apiserver-ha-170194-m03" [2ab83259-202f-4f75-97ae-7aba8a38638e] Running
	I0520 13:31:21.629025  624195 system_pods.go:61] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:31:21.629028  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:31:21.629032  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m03" [91e02abe-a8d2-48b0-b883-7d5e2cd184ec] Running
	I0520 13:31:21.629035  624195 system_pods.go:61] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:31:21.629038  624195 system_pods.go:61] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:31:21.629041  624195 system_pods.go:61] "kube-proxy-x79p4" [20b12a4a-7f86-4521-9711-7b7efcf74995] Running
	I0520 13:31:21.629047  624195 system_pods.go:61] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:31:21.629050  624195 system_pods.go:61] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:31:21.629056  624195 system_pods.go:61] "kube-scheduler-ha-170194-m03" [5249cfdc-cb02-440e-aee3-a44444184426] Running
	I0520 13:31:21.629059  624195 system_pods.go:61] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:31:21.629061  624195 system_pods.go:61] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:31:21.629067  624195 system_pods.go:61] "kube-vip-ha-170194-m03" [29f858fa-1de2-4632-ae1a-30847a60fa99] Running
	I0520 13:31:21.629072  624195 system_pods.go:61] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:31:21.629078  624195 system_pods.go:74] duration metric: took 185.392781ms to wait for pod list to return data ...
	I0520 13:31:21.629092  624195 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:31:21.815526  624195 request.go:629] Waited for 186.337056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:31:21.815589  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:31:21.815600  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.815608  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.815613  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.819248  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.819417  624195 default_sa.go:45] found service account: "default"
	I0520 13:31:21.819441  624195 default_sa.go:55] duration metric: took 190.34107ms for default service account to be created ...
	I0520 13:31:21.819453  624195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:31:22.014877  624195 request.go:629] Waited for 195.3227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:22.014940  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:22.014945  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:22.014953  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:22.014956  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:22.022443  624195 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 13:31:22.028413  624195 system_pods.go:86] 24 kube-system pods found
	I0520 13:31:22.028450  624195 system_pods.go:89] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:31:22.028455  624195 system_pods.go:89] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:31:22.028460  624195 system_pods.go:89] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:31:22.028465  624195 system_pods.go:89] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:31:22.028469  624195 system_pods.go:89] "etcd-ha-170194-m03" [22d1124d-4ec7-4721-94d7-b05ee48e4f04] Running
	I0520 13:31:22.028473  624195 system_pods.go:89] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:31:22.028477  624195 system_pods.go:89] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:31:22.028481  624195 system_pods.go:89] "kindnet-q72lt" [1ff7bf65-cfec-4a8d-acb6-7177d005791f] Running
	I0520 13:31:22.028485  624195 system_pods.go:89] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:31:22.028489  624195 system_pods.go:89] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:31:22.028493  624195 system_pods.go:89] "kube-apiserver-ha-170194-m03" [2ab83259-202f-4f75-97ae-7aba8a38638e] Running
	I0520 13:31:22.028497  624195 system_pods.go:89] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:31:22.028501  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:31:22.028509  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m03" [91e02abe-a8d2-48b0-b883-7d5e2cd184ec] Running
	I0520 13:31:22.028513  624195 system_pods.go:89] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:31:22.028517  624195 system_pods.go:89] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:31:22.028521  624195 system_pods.go:89] "kube-proxy-x79p4" [20b12a4a-7f86-4521-9711-7b7efcf74995] Running
	I0520 13:31:22.028525  624195 system_pods.go:89] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:31:22.028528  624195 system_pods.go:89] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:31:22.028535  624195 system_pods.go:89] "kube-scheduler-ha-170194-m03" [5249cfdc-cb02-440e-aee3-a44444184426] Running
	I0520 13:31:22.028540  624195 system_pods.go:89] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:31:22.028547  624195 system_pods.go:89] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:31:22.028550  624195 system_pods.go:89] "kube-vip-ha-170194-m03" [29f858fa-1de2-4632-ae1a-30847a60fa99] Running
	I0520 13:31:22.028555  624195 system_pods.go:89] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:31:22.028561  624195 system_pods.go:126] duration metric: took 209.098779ms to wait for k8s-apps to be running ...
	I0520 13:31:22.028573  624195 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:31:22.028622  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:31:22.045782  624195 system_svc.go:56] duration metric: took 17.199492ms WaitForService to wait for kubelet
	I0520 13:31:22.045815  624195 kubeadm.go:576] duration metric: took 16.247602675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:31:22.045835  624195 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:31:22.215313  624195 request.go:629] Waited for 169.380053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes
	I0520 13:31:22.215376  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes
	I0520 13:31:22.215381  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:22.215389  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:22.215394  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:22.219272  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:22.220152  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220187  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220199  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220203  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220207  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220210  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220214  624195 node_conditions.go:105] duration metric: took 174.37435ms to run NodePressure ...
	I0520 13:31:22.220228  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:31:22.220258  624195 start.go:254] writing updated cluster config ...
	I0520 13:31:22.220619  624195 ssh_runner.go:195] Run: rm -f paused
	I0520 13:31:22.275580  624195 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 13:31:22.280049  624195 out.go:177] * Done! kubectl is now configured to use "ha-170194" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.867172482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212088867147482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d54ebbb-14e9-4ef9-b2a9-eeb3182bdf28 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.867742471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=556d4556-0806-4244-bd2c-51fb0c9413a5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.867814642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=556d4556-0806-4244-bd2c-51fb0c9413a5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.868201997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=556d4556-0806-4244-bd2c-51fb0c9413a5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.903613879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e5e3576-d8d7-4149-8d65-57005d4bbef7 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.903685831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e5e3576-d8d7-4149-8d65-57005d4bbef7 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.905046493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56fcf2be-f1eb-4efb-9ed0-0b1723ed9fec name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.905517147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212088905495164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56fcf2be-f1eb-4efb-9ed0-0b1723ed9fec name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.906179262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0787eae0-d330-4072-8325-7d94ec96935a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.906232220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0787eae0-d330-4072-8325-7d94ec96935a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.906465894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0787eae0-d330-4072-8325-7d94ec96935a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.946898437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=049e505f-ea6f-463e-b840-4e2a86eaab93 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.947013166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=049e505f-ea6f-463e-b840-4e2a86eaab93 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.948230240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b841473a-78fe-4e32-996d-fa2a686f3181 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.948694953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212088948672000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b841473a-78fe-4e32-996d-fa2a686f3181 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.949288223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7160d39-3f08-4cd7-bdea-0289200bca0d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.949353263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7160d39-3f08-4cd7-bdea-0289200bca0d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.949577685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7160d39-3f08-4cd7-bdea-0289200bca0d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.993723234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cd94a7d-6d50-4dc2-bccd-8397ef2d2014 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.993815235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cd94a7d-6d50-4dc2-bccd-8397ef2d2014 name=/runtime.v1.RuntimeService/Version
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.995423796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2f9dc59-2afb-4454-866f-6097c3b9860b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.997230631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212088997201360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2f9dc59-2afb-4454-866f-6097c3b9860b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.997766946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8391d4bb-4379-4b43-bd7e-7b3324ab3da9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.997839681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8391d4bb-4379-4b43-bd7e-7b3324ab3da9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:34:48 ha-170194 crio[680]: time="2024-05-20 13:34:48.998137152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8391d4bb-4379-4b43-bd7e-7b3324ab3da9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf740d9b5f06d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   85c1015ea36da       busybox-fc5497c4f-kn5pb
	9ea85179fd050       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   109392450c01e       storage-provisioner
	d3c1362d9012c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   cb6f21c242e20       coredns-7db6d8ff4d-vk78q
	6bd28e2e55305       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   901f35680bee5       coredns-7db6d8ff4d-s28r6
	ef86504a6a218       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   259c31fa9472e       kindnet-cmd8x
	2ca782f6be5aa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   ef9cc40406ad7       kube-proxy-qth8f
	334824a1ffd8b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   c2f00aa61309b       kube-vip-ha-170194
	e40d2be6b414d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   dfcd6dd7a8d33       kube-apiserver-ha-170194
	bd7f5eac64d8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   0a5e941c6740d       etcd-ha-170194
	d125c402bd4cb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   1a02a71cebea3       kube-scheduler-ha-170194
	b0dc1542ea21a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   e89b9ecab8ffc       kube-controller-manager-ha-170194
	
	
	==> coredns [6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583] <==
	[INFO] 127.0.0.1:40383 - 4867 "HINFO IN 2061741283489635823.1468648125148225089. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010760865s
	[INFO] 10.244.0.4:41834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001000301s
	[INFO] 10.244.0.4:48478 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015646089s
	[INFO] 10.244.0.4:56808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.008004652s
	[INFO] 10.244.0.4:39580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113846s
	[INFO] 10.244.0.4:34499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153438s
	[INFO] 10.244.0.4:47635 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003467859s
	[INFO] 10.244.0.4:37386 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211396s
	[INFO] 10.244.0.4:37274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116452s
	[INFO] 10.244.1.2:33488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156093s
	[INFO] 10.244.1.2:44452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130005s
	[INFO] 10.244.2.2:54953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216728s
	[INFO] 10.244.2.2:41118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098892s
	[INFO] 10.244.0.4:52970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086695s
	[INFO] 10.244.0.4:33272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104087s
	[INFO] 10.244.0.4:47074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061643s
	[INFO] 10.244.1.2:46181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125314s
	[INFO] 10.244.1.2:60651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114425s
	[INFO] 10.244.2.2:39831 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092598s
	[INFO] 10.244.2.2:36745 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009346s
	[INFO] 10.244.0.4:58943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126961s
	[INFO] 10.244.0.4:51569 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093816s
	[INFO] 10.244.0.4:33771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095037s
	[INFO] 10.244.1.2:51959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152608s
	[INFO] 10.244.2.2:41273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085919s
	
	
	==> coredns [d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4] <==
	[INFO] 10.244.0.4:60912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096722s
	[INFO] 10.244.1.2:39690 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002116705s
	[INFO] 10.244.1.2:39465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205731s
	[INFO] 10.244.1.2:48674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104027s
	[INFO] 10.244.1.2:42811 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001662979s
	[INFO] 10.244.1.2:55637 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155358s
	[INFO] 10.244.1.2:34282 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105391s
	[INFO] 10.244.2.2:55675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129728s
	[INFO] 10.244.2.2:33579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845622s
	[INFO] 10.244.2.2:38991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087704s
	[INFO] 10.244.2.2:60832 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001368991s
	[INFO] 10.244.2.2:49213 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064756s
	[INFO] 10.244.2.2:54664 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073817s
	[INFO] 10.244.0.4:58834 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096728s
	[INFO] 10.244.1.2:58412 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081949s
	[INFO] 10.244.1.2:52492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085342s
	[INFO] 10.244.2.2:34598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011731s
	[INFO] 10.244.2.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131389s
	[INFO] 10.244.0.4:33373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185564s
	[INFO] 10.244.1.2:38899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131605s
	[INFO] 10.244.1.2:39420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251117s
	[INFO] 10.244.1.2:39569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142225s
	[INFO] 10.244.2.2:33399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185075s
	[INFO] 10.244.2.2:48490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100278s
	[INFO] 10.244.2.2:35988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115036s
	
	
	==> describe nodes <==
	Name:               ha-170194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:28:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-170194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c0123e982bf4840b6eb6a3f175c7438
	  System UUID:                4c0123e9-82bf-4840-b6eb-6a3f175c7438
	  Boot ID:                    37123cd6-de29-4d66-9faf-c58bcb2e7628
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kn5pb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-7db6d8ff4d-s28r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 coredns-7db6d8ff4d-vk78q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 etcd-ha-170194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-cmd8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-170194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-170194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-qth8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-170194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-170194                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m1s   kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-170194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-170194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-170194 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m3s   node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-170194 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal  RegisteredNode           3m30s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	
	
	Name:               ha-170194-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:32:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    ha-170194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcdee518e92c4c0ba5f3ba763f746ea2
	  System UUID:                dcdee518-e92c-4c0b-a5f3-ba763f746ea2
	  Boot ID:                    c436c0af-64d9-48ee-9d47-d67d9b728b14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmq2s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-170194-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-5mg44                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m4s
	  kube-system                 kube-apiserver-ha-170194-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-170194-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-7ncvb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-ha-170194-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-170194-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           3m30s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  NodeNotReady             110s                 node-controller  Node ha-170194-m02 status is now: NodeNotReady
	
	
	Name:               ha-170194-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:34:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-170194-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64924ff33ca44b9f8535eb50161a056c
	  System UUID:                64924ff3-3ca4-4b9f-8535-eb50161a056c
	  Boot ID:                    98d78edd-8ff8-4cb4-b546-ec91b16aa0c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vr9tf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-170194-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m46s
	  kube-system                 kindnet-q72lt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m48s
	  kube-system                 kube-apiserver-ha-170194-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-ha-170194-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-x79p4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-scheduler-ha-170194-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-vip-ha-170194-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node ha-170194-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	
	
	Name:               ha-170194-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-170194-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04786f3c085342e689c4ca279f442854
	  System UUID:                04786f3c-0853-42e6-89c4-ca279f442854
	  Boot ID:                    d9185916-82d4-4a95-9131-2ebf014960ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-98pk9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-52pf8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m51s (x3 over 2m52s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x3 over 2m52s)  kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x3 over 2m52s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-170194-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 13:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051728] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May20 13:28] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.729382] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.644635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.658967] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056574] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.149929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138520] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.255022] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.918021] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.231733] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.055898] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.968265] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.072694] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.206801] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:29] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2] <==
	{"level":"warn","ts":"2024-05-20T13:34:49.312337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.323165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.329132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.337793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.344402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.348598Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.350527Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.352041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.375203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.386172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.394356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.39859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.402059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.412199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.412364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.415691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.420981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.422684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.429835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.438023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.442764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.452474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.461993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.470793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:34:49.523224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:34:49 up 6 min,  0 users,  load average: 0.54, 0.52, 0.25
	Linux ha-170194 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2] <==
	I0520 13:34:19.029724       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:34:29.037529       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:34:29.037551       1 main.go:227] handling current node
	I0520 13:34:29.037563       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:34:29.037568       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:34:29.037700       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:34:29.037726       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:34:29.037820       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:34:29.037843       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:34:39.051587       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:34:39.051624       1 main.go:227] handling current node
	I0520 13:34:39.051635       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:34:39.051641       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:34:39.051741       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:34:39.051758       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:34:39.051818       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:34:39.051835       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:34:49.063001       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:34:49.063088       1 main.go:227] handling current node
	I0520 13:34:49.063124       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:34:49.063143       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:34:49.063291       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:34:49.063315       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:34:49.063372       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:34:49.063390       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23] <==
	I0520 13:28:33.025295       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:28:34.336136       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:28:34.371650       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 13:28:34.387581       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:28:46.732023       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 13:28:46.983015       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 13:31:02.034696       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 13:31:02.035118       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 13:31:02.035007       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 30.154µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 13:31:02.036523       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 13:31:02.036670       1 timeout.go:142] post-timeout activity - time-elapsed: 2.126749ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0520 13:31:27.831506       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60936: use of closed network connection
	E0520 13:31:28.040648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60962: use of closed network connection
	E0520 13:31:28.239126       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60992: use of closed network connection
	E0520 13:31:28.433777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32776: use of closed network connection
	E0520 13:31:28.614842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32796: use of closed network connection
	E0520 13:31:28.993178       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32842: use of closed network connection
	E0520 13:31:29.182707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32858: use of closed network connection
	E0520 13:31:29.361088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E0520 13:31:29.673009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32910: use of closed network connection
	E0520 13:31:29.868652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32918: use of closed network connection
	E0520 13:31:30.079857       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32934: use of closed network connection
	E0520 13:31:30.271882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32948: use of closed network connection
	E0520 13:31:30.453468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32964: use of closed network connection
	E0520 13:31:30.632204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32984: use of closed network connection
	
	
	==> kube-controller-manager [b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa] <==
	I0520 13:31:01.275627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-170194-m03"
	I0520 13:31:23.271341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.578576ms"
	I0520 13:31:23.304580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.007878ms"
	I0520 13:31:23.304972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="268.754µs"
	I0520 13:31:23.311369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.747µs"
	I0520 13:31:23.455290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.439314ms"
	I0520 13:31:23.715172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="256.9714ms"
	I0520 13:31:23.715260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.796µs"
	I0520 13:31:23.752201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.784284ms"
	I0520 13:31:23.754347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.648µs"
	I0520 13:31:24.274453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.539µs"
	I0520 13:31:26.989382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.825873ms"
	I0520 13:31:26.989510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.06µs"
	I0520 13:31:27.054021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.119458ms"
	I0520 13:31:27.056157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.965µs"
	I0520 13:31:27.352759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.205088ms"
	I0520 13:31:27.352987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.183µs"
	E0520 13:31:57.804237       1 certificate_controller.go:146] Sync csr-2hqlq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2hqlq": the object has been modified; please apply your changes to the latest version and try again
	I0520 13:31:58.120165       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-170194-m04\" does not exist"
	I0520 13:31:58.167325       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-170194-m04" podCIDRs=["10.244.3.0/24"]
	I0520 13:32:01.303697       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-170194-m04"
	I0520 13:32:08.228987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	I0520 13:32:59.762786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	I0520 13:32:59.893315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.632435ms"
	I0520 13:32:59.893524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.369µs"
	
	
	==> kube-proxy [2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b] <==
	I0520 13:28:47.863207       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:28:47.879661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0520 13:28:47.984212       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:28:47.984972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:28:47.985030       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:28:47.989343       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:28:47.989639       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:28:47.989658       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:28:47.992466       1 config.go:192] "Starting service config controller"
	I0520 13:28:47.992490       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:28:47.993825       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:28:47.993844       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:28:47.997010       1 config.go:319] "Starting node config controller"
	I0520 13:28:47.998321       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:28:48.092782       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:28:48.098009       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:28:48.098469       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8] <==
	W0520 13:28:32.301182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:28:32.301302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:28:32.327409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:28:32.327569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:28:32.466501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:28:32.466603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:28:32.582560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:28:32.582686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:28:35.451216       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 13:31:01.251895       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x79p4\": pod kube-proxy-x79p4 is already assigned to node \"ha-170194-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x79p4" node="ha-170194-m03"
	E0520 13:31:01.252292       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 20b12a4a-7f86-4521-9711-7b7efcf74995(kube-system/kube-proxy-x79p4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x79p4"
	E0520 13:31:01.252358       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x79p4\": pod kube-proxy-x79p4 is already assigned to node \"ha-170194-m03\"" pod="kube-system/kube-proxy-x79p4"
	I0520 13:31:01.252425       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x79p4" node="ha-170194-m03"
	E0520 13:31:23.276303       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kn5pb\": pod busybox-fc5497c4f-kn5pb is already assigned to node \"ha-170194\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kn5pb" node="ha-170194"
	E0520 13:31:23.276385       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc78b16d-ff4a-4bb6-9a1e-62f31641b442(default/busybox-fc5497c4f-kn5pb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kn5pb"
	E0520 13:31:23.276417       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kn5pb\": pod busybox-fc5497c4f-kn5pb is already assigned to node \"ha-170194\"" pod="default/busybox-fc5497c4f-kn5pb"
	I0520 13:31:23.276437       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kn5pb" node="ha-170194"
	E0520 13:31:58.307794       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5lzhk\": pod kube-proxy-5lzhk is already assigned to node \"ha-170194-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5lzhk" node="ha-170194-m04"
	E0520 13:31:58.307956       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9038c429-a368-45d7-9a3c-cdc8e614b0bb(kube-system/kube-proxy-5lzhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5lzhk"
	E0520 13:31:58.308020       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5lzhk\": pod kube-proxy-5lzhk is already assigned to node \"ha-170194-m04\"" pod="kube-system/kube-proxy-5lzhk"
	I0520 13:31:58.308071       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5lzhk" node="ha-170194-m04"
	E0520 13:31:58.318254       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vbq7d\": pod kindnet-vbq7d is already assigned to node \"ha-170194-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vbq7d" node="ha-170194-m04"
	E0520 13:31:58.318443       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8f46515c-2976-4142-8053-d41e78ea4f8b(kube-system/kindnet-vbq7d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vbq7d"
	E0520 13:31:58.318571       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vbq7d\": pod kindnet-vbq7d is already assigned to node \"ha-170194-m04\"" pod="kube-system/kindnet-vbq7d"
	I0520 13:31:58.318665       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vbq7d" node="ha-170194-m04"
	
	
	==> kubelet <==
	May 20 13:30:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:31:23 ha-170194 kubelet[1373]: I0520 13:31:23.260825    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=155.260769877 podStartE2EDuration="2m35.260769877s" podCreationTimestamp="2024-05-20 13:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 13:28:51.365877737 +0000 UTC m=+17.262633058" watchObservedRunningTime="2024-05-20 13:31:23.260769877 +0000 UTC m=+169.157525217"
	May 20 13:31:23 ha-170194 kubelet[1373]: I0520 13:31:23.261512    1373 topology_manager.go:215] "Topology Admit Handler" podUID="bc78b16d-ff4a-4bb6-9a1e-62f31641b442" podNamespace="default" podName="busybox-fc5497c4f-kn5pb"
	May 20 13:31:23 ha-170194 kubelet[1373]: I0520 13:31:23.357266    1373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnd8s\" (UniqueName: \"kubernetes.io/projected/bc78b16d-ff4a-4bb6-9a1e-62f31641b442-kube-api-access-cnd8s\") pod \"busybox-fc5497c4f-kn5pb\" (UID: \"bc78b16d-ff4a-4bb6-9a1e-62f31641b442\") " pod="default/busybox-fc5497c4f-kn5pb"
	May 20 13:31:26 ha-170194 kubelet[1373]: I0520 13:31:26.940015    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-kn5pb" podStartSLOduration=1.413361048 podStartE2EDuration="3.939887456s" podCreationTimestamp="2024-05-20 13:31:23 +0000 UTC" firstStartedPulling="2024-05-20 13:31:23.826622828 +0000 UTC m=+169.723378129" lastFinishedPulling="2024-05-20 13:31:26.353149221 +0000 UTC m=+172.249904537" observedRunningTime="2024-05-20 13:31:26.939197859 +0000 UTC m=+172.835953181" watchObservedRunningTime="2024-05-20 13:31:26.939887456 +0000 UTC m=+172.836642778"
	May 20 13:31:34 ha-170194 kubelet[1373]: E0520 13:31:34.276868    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:31:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:31:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:31:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:31:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:32:34 ha-170194 kubelet[1373]: E0520 13:32:34.276768    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:32:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:32:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:32:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:32:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:33:34 ha-170194 kubelet[1373]: E0520 13:33:34.276790    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:33:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:33:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:33:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:33:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:34:34 ha-170194 kubelet[1373]: E0520 13:34:34.277139    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:34:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:34:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:34:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:34:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-170194 -n ha-170194
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (3.210671406s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:34:54.129741  628969 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:34:54.129842  628969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:54.129849  628969 out.go:304] Setting ErrFile to fd 2...
	I0520 13:34:54.129853  628969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:54.130058  628969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:34:54.130221  628969 out.go:298] Setting JSON to false
	I0520 13:34:54.130246  628969 mustload.go:65] Loading cluster: ha-170194
	I0520 13:34:54.130290  628969 notify.go:220] Checking for updates...
	I0520 13:34:54.130609  628969 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:34:54.130623  628969 status.go:255] checking status of ha-170194 ...
	I0520 13:34:54.131003  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.131112  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.149794  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0520 13:34:54.150329  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.150871  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.150896  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.151365  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.151589  628969 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:34:54.153616  628969 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:34:54.153645  628969 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:54.154052  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.154109  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.169223  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0520 13:34:54.169686  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.170188  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.170218  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.170639  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.170833  628969 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:34:54.173654  628969 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:54.174150  628969 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:54.174182  628969 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:54.174320  628969 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:54.174600  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.174659  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.193375  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0520 13:34:54.193825  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.194348  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.194381  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.194698  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.194879  628969 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:34:54.195110  628969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:54.195148  628969 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:34:54.198230  628969 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:54.198669  628969 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:54.198693  628969 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:54.198828  628969 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:34:54.198985  628969 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:34:54.199134  628969 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:34:54.199301  628969 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:34:54.286091  628969 ssh_runner.go:195] Run: systemctl --version
	I0520 13:34:54.294396  628969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:54.314354  628969 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:34:54.314398  628969 api_server.go:166] Checking apiserver status ...
	I0520 13:34:54.314438  628969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:34:54.334819  628969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:34:54.345661  628969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:34:54.346088  628969 ssh_runner.go:195] Run: ls
	I0520 13:34:54.352030  628969 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:34:54.357476  628969 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:34:54.357502  628969 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:34:54.357512  628969 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:34:54.357528  628969 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:34:54.357829  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.357863  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.373588  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I0520 13:34:54.374034  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.374485  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.374505  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.374824  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.375096  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:34:54.376971  628969 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:34:54.376987  628969 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:54.377332  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.377374  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.392750  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0520 13:34:54.393211  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.393692  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.393721  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.394134  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.394328  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:34:54.397598  628969 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:54.398121  628969 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:54.398148  628969 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:54.398324  628969 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:54.398627  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:54.398674  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:54.414449  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41249
	I0520 13:34:54.414903  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:54.415374  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:54.415400  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:54.415772  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:54.416006  628969 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:34:54.416203  628969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:54.416223  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:34:54.419441  628969 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:54.419963  628969 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:54.419989  628969 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:54.420211  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:34:54.420364  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:34:54.420467  628969 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:34:54.420608  628969 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:34:56.941587  628969 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:34:56.941687  628969 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:34:56.941705  628969 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:34:56.941714  628969 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:34:56.941735  628969 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:34:56.941743  628969 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:34:56.942154  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:56.942192  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:56.957206  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0520 13:34:56.957874  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:56.958396  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:56.958421  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:56.958737  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:56.958961  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:34:56.960553  628969 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:34:56.960568  628969 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:34:56.960838  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:56.960864  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:56.976484  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0520 13:34:56.976960  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:56.977462  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:56.977489  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:56.977802  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:56.978010  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:34:56.980814  628969 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:56.981382  628969 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:34:56.981417  628969 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:56.981559  628969 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:34:56.981955  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:56.982003  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:56.997049  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0520 13:34:56.997551  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:56.998208  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:56.998233  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:56.998570  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:56.998799  628969 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:34:56.998991  628969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:56.999012  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:34:57.002534  628969 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:57.002958  628969 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:34:57.002988  628969 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:34:57.003159  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:34:57.003348  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:34:57.003522  628969 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:34:57.003663  628969 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:34:57.085794  628969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:57.104076  628969 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:34:57.104111  628969 api_server.go:166] Checking apiserver status ...
	I0520 13:34:57.104143  628969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:34:57.123340  628969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:34:57.136339  628969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:34:57.136404  628969 ssh_runner.go:195] Run: ls
	I0520 13:34:57.140836  628969 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:34:57.145223  628969 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:34:57.145270  628969 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:34:57.145282  628969 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:34:57.145307  628969 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:34:57.145596  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:57.145627  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:57.161028  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46431
	I0520 13:34:57.161565  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:57.162069  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:57.162099  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:57.162430  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:57.162643  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:34:57.164337  628969 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:34:57.164359  628969 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:34:57.164676  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:57.164703  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:57.179806  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0520 13:34:57.180252  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:57.180709  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:57.180734  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:57.181097  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:57.181314  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:34:57.184216  628969 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:57.184681  628969 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:34:57.184717  628969 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:57.184846  628969 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:34:57.185221  628969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:57.185302  628969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:57.199854  628969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0520 13:34:57.200324  628969 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:57.200759  628969 main.go:141] libmachine: Using API Version  1
	I0520 13:34:57.200781  628969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:57.201112  628969 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:57.201318  628969 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:34:57.201495  628969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:57.201523  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:34:57.204207  628969 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:57.204652  628969 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:34:57.204680  628969 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:34:57.204820  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:34:57.205030  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:34:57.205189  628969 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:34:57.205365  628969 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:34:57.283764  628969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:57.297064  628969 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
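The m02 error above reduces to the TCP dial that the status command makes before opening an SSH session to the node: the connect to 192.168.39.155:22 fails with "no route to host", so kubelet and apiserver are reported as Nonexistent for that node. Below is a minimal, hypothetical Go sketch of that same reachability probe (a standalone check, not minikube's sshutil code; the address is copied from the log above):

	// probe_ssh.go — hypothetical standalone probe, not part of minikube.
	// It performs the same kind of plain TCP connect to the node's SSH port
	// that precedes the status command's SSH session.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.155:22" // ha-170194-m02's IP and SSH port, from the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// While the node is unreachable this would print an error of the
			// same shape as the log's "dial tcp ...: connect: no route to host".
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}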
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (5.208769303s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:34:58.277463  629053 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:34:58.277715  629053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:58.277725  629053 out.go:304] Setting ErrFile to fd 2...
	I0520 13:34:58.277729  629053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:34:58.277923  629053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:34:58.278079  629053 out.go:298] Setting JSON to false
	I0520 13:34:58.278107  629053 mustload.go:65] Loading cluster: ha-170194
	I0520 13:34:58.278218  629053 notify.go:220] Checking for updates...
	I0520 13:34:58.278455  629053 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:34:58.278470  629053 status.go:255] checking status of ha-170194 ...
	I0520 13:34:58.278823  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.278892  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.296709  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0520 13:34:58.297241  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.297896  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.297927  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.298387  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.298598  629053 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:34:58.300265  629053 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:34:58.300289  629053 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:58.300638  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.300685  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.315877  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0520 13:34:58.316362  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.317002  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.317035  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.317527  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.317783  629053 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:34:58.320923  629053 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:58.321475  629053 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:58.321509  629053 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:58.321679  629053 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:34:58.322008  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.322052  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.338516  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0520 13:34:58.339133  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.340516  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.340550  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.340973  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.341273  629053 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:34:58.341504  629053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:58.341545  629053 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:34:58.344721  629053 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:58.345097  629053 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:34:58.345128  629053 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:34:58.345431  629053 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:34:58.345565  629053 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:34:58.345787  629053 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:34:58.345944  629053 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:34:58.428704  629053 ssh_runner.go:195] Run: systemctl --version
	I0520 13:34:58.436294  629053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:34:58.459880  629053 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:34:58.460066  629053 api_server.go:166] Checking apiserver status ...
	I0520 13:34:58.460118  629053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:34:58.483228  629053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:34:58.492717  629053 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:34:58.492774  629053 ssh_runner.go:195] Run: ls
	I0520 13:34:58.497165  629053 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:34:58.503008  629053 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:34:58.503031  629053 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:34:58.503043  629053 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:34:58.503061  629053 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:34:58.503368  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.503404  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.521152  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0520 13:34:58.521730  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.522287  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.522309  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.522657  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.522872  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:34:58.524461  629053 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:34:58.524492  629053 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:58.524791  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.524826  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.540695  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35109
	I0520 13:34:58.541158  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.541789  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.541823  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.542188  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.542451  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:34:58.545976  629053 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:58.546541  629053 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:58.546568  629053 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:58.546750  629053 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:34:58.547152  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:34:58.547196  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:34:58.563038  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0520 13:34:58.563424  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:34:58.563919  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:34:58.563940  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:34:58.564290  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:34:58.564472  629053 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:34:58.564644  629053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:34:58.564661  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:34:58.568860  629053 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:58.569269  629053 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:34:58.569290  629053 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:34:58.569458  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:34:58.569654  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:34:58.569811  629053 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:34:58.569940  629053 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:35:00.017560  629053 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:00.017646  629053 retry.go:31] will retry after 326.676305ms: dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:03.085604  629053 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:03.085693  629053 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:35:03.085711  629053 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:03.085722  629053 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:35:03.085747  629053 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:03.085757  629053 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:03.086170  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.086210  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.102193  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0520 13:35:03.102675  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.103269  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.103296  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.103718  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.103966  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:03.105805  629053 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:03.105826  629053 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:03.106238  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.106292  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.122234  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I0520 13:35:03.122801  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.123305  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.123326  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.123643  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.123850  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:03.126552  629053 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:03.127009  629053 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:03.127037  629053 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:03.127217  629053 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:03.127502  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.127538  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.142818  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0520 13:35:03.143254  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.143743  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.143765  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.144132  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.144317  629053 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:03.144516  629053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:03.144544  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:03.147707  629053 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:03.148165  629053 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:03.148196  629053 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:03.148366  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:03.148542  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:03.148704  629053 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:03.148821  629053 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:03.228417  629053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:03.243317  629053 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:03.243357  629053 api_server.go:166] Checking apiserver status ...
	I0520 13:35:03.243408  629053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:03.256502  629053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:03.266289  629053 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:03.266356  629053 ssh_runner.go:195] Run: ls
	I0520 13:35:03.271121  629053 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:03.277493  629053 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:03.277525  629053 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:03.277536  629053 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:03.277559  629053 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:03.277865  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.277956  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.298002  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I0520 13:35:03.298512  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.299469  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.299505  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.300540  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.300765  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:03.302517  629053 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:03.302550  629053 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:03.302869  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.302915  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.317885  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0520 13:35:03.318472  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.318970  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.318994  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.319279  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.319486  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:03.322200  629053 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:03.322692  629053 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:03.322718  629053 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:03.322852  629053 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:03.323222  629053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:03.323262  629053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:03.339416  629053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0520 13:35:03.339825  629053 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:03.340369  629053 main.go:141] libmachine: Using API Version  1
	I0520 13:35:03.340396  629053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:03.340743  629053 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:03.340959  629053 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:03.341145  629053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:03.341170  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:03.344141  629053 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:03.344747  629053 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:03.344768  629053 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:03.344937  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:03.345086  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:03.345271  629053 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:03.345408  629053 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:03.425764  629053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:03.439648  629053 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
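For the nodes that remain Running, the same log shows the control-plane health probe: an HTTPS GET against https://192.168.39.254:8443/healthz that returns 200 with body "ok". The following is a hypothetical, self-contained Go sketch of such a probe; TLS verification is skipped here only to keep the example short, whereas the real client trusts the cluster CA:

	// healthz_probe.go — hypothetical sketch of the healthz check reported in the
	// log ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only: skip certificate verification
				// instead of loading the cluster CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers "200: ok", matching the log output above.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}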
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (5.211846371s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:04.416218  629169 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:04.416501  629169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:04.416511  629169 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:04.416516  629169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:04.416755  629169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:04.417002  629169 out.go:298] Setting JSON to false
	I0520 13:35:04.417037  629169 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:04.417162  629169 notify.go:220] Checking for updates...
	I0520 13:35:04.417621  629169 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:04.417643  629169 status.go:255] checking status of ha-170194 ...
	I0520 13:35:04.418117  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.418179  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.435885  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0520 13:35:04.436363  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.436950  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.436995  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.437390  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.437622  629169 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:04.439101  629169 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:04.439121  629169 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:04.439400  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.439433  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.454827  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0520 13:35:04.455277  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.455918  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.455948  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.456297  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.456486  629169 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:04.458929  629169 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:04.459383  629169 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:04.459410  629169 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:04.459533  629169 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:04.459861  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.459914  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.475444  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32919
	I0520 13:35:04.475887  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.476330  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.476354  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.476707  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.476949  629169 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:04.477154  629169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:04.477180  629169 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:04.480616  629169 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:04.480995  629169 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:04.481031  629169 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:04.481292  629169 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:04.481489  629169 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:04.481639  629169 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:04.481771  629169 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:04.560426  629169 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:04.566681  629169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:04.581545  629169 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:04.581586  629169 api_server.go:166] Checking apiserver status ...
	I0520 13:35:04.581628  629169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:04.595033  629169 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:04.603808  629169 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:04.603855  629169 ssh_runner.go:195] Run: ls
	I0520 13:35:04.607902  629169 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:04.614153  629169 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:04.614175  629169 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:04.614185  629169 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:04.614203  629169 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:04.614484  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.614518  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.630039  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I0520 13:35:04.630520  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.631091  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.631118  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.631534  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.631781  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:04.633613  629169 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:35:04.633632  629169 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:04.633932  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.633975  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.650819  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0520 13:35:04.651224  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.651648  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.651672  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.652004  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.652208  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:35:04.655066  629169 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:04.655434  629169 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:04.655457  629169 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:04.655604  629169 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:04.655882  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:04.655916  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:04.670923  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I0520 13:35:04.671378  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:04.671873  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:04.671899  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:04.672182  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:04.672357  629169 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:35:04.672549  629169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:04.672577  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:35:04.675414  629169 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:04.675948  629169 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:04.675973  629169 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:04.676173  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:35:04.676373  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:35:04.676530  629169 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:35:04.676681  629169 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:35:06.157587  629169 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:06.157705  629169 retry.go:31] will retry after 165.48175ms: dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:09.229622  629169 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:09.229748  629169 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:35:09.229773  629169 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:09.229781  629169 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:35:09.229801  629169 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:09.229809  629169 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:09.230124  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.230168  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.246672  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33715
	I0520 13:35:09.247204  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.247852  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.247886  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.248234  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.248453  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:09.250176  629169 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:09.250200  629169 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:09.250488  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.250530  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.266124  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0520 13:35:09.266633  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.267159  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.267194  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.267554  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.267782  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:09.270569  629169 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:09.271061  629169 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:09.271087  629169 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:09.271223  629169 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:09.271643  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.271694  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.286713  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I0520 13:35:09.287112  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.287701  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.287728  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.288066  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.288273  629169 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:09.288613  629169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:09.288640  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:09.291895  629169 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:09.292367  629169 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:09.292394  629169 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:09.292575  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:09.292819  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:09.292986  629169 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:09.293094  629169 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:09.368657  629169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:09.383428  629169 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:09.383470  629169 api_server.go:166] Checking apiserver status ...
	I0520 13:35:09.383519  629169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:09.396435  629169 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:09.406312  629169 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:09.406382  629169 ssh_runner.go:195] Run: ls
	I0520 13:35:09.412813  629169 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:09.417701  629169 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:09.419268  629169 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:09.419283  629169 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:09.419310  629169 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:09.419639  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.419683  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.436160  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0520 13:35:09.436611  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.437115  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.437140  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.437529  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.437721  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:09.439163  629169 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:09.439184  629169 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:09.439569  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.439616  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.455608  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0520 13:35:09.456154  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.456725  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.456751  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.457172  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.457400  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:09.460021  629169 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:09.460421  629169 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:09.460448  629169 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:09.460608  629169 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:09.460935  629169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:09.460980  629169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:09.475982  629169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0520 13:35:09.476457  629169 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:09.476928  629169 main.go:141] libmachine: Using API Version  1
	I0520 13:35:09.476956  629169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:09.477295  629169 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:09.477552  629169 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:09.477774  629169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:09.477807  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:09.480665  629169 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:09.481103  629169 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:09.481133  629169 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:09.481383  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:09.481587  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:09.481726  629169 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:09.481923  629169 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:09.565904  629169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:09.582420  629169 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
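The stderr above shows the status probe for ha-170194-m02 failing on SSH dial ("connect: no route to host") and retrying before giving up, which is what drives the Host:Error / Kubelet:Nonexistent result. As a minimal sketch only (this is not minikube's sshutil/retry code; the attempt count and backoff are illustrative assumptions, and the address is taken from the log above), the dial-with-retry pattern looks like:

	// Sketch of a dial-with-retry loop similar in spirit to the sshutil/retry
	// behaviour logged above. NOT minikube's implementation; attempts, backoff
	// and timeout values are assumptions for illustration only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			// e.g. "dial tcp 192.168.39.155:22: connect: no route to host"
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(backoff)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		conn, err := dialWithRetry("192.168.39.155:22", 3, 300*time.Millisecond)
		if err != nil {
			fmt.Println("status check failed:", err)
			return
		}
		conn.Close()
	}

When every attempt ends in "no route to host", the status code reports the node host as Error and marks kubelet/apiserver Nonexistent, exactly as seen in the stdout blocks that follow.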
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (4.776434341s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:11.346771  629269 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:11.346910  629269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:11.346926  629269 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:11.346932  629269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:11.347113  629269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:11.347292  629269 out.go:298] Setting JSON to false
	I0520 13:35:11.347317  629269 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:11.347412  629269 notify.go:220] Checking for updates...
	I0520 13:35:11.347652  629269 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:11.347668  629269 status.go:255] checking status of ha-170194 ...
	I0520 13:35:11.348063  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.348128  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.367510  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0520 13:35:11.367976  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.368662  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.368686  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.369094  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.369390  629269 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:11.371169  629269 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:11.371195  629269 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:11.371510  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.371557  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.387784  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I0520 13:35:11.388302  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.388886  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.388929  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.389224  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.389462  629269 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:11.392649  629269 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:11.393124  629269 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:11.393156  629269 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:11.393415  629269 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:11.393752  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.393796  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.410700  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0520 13:35:11.411272  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.411774  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.411794  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.412233  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.412452  629269 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:11.412676  629269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:11.412712  629269 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:11.415781  629269 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:11.416317  629269 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:11.416349  629269 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:11.416514  629269 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:11.416798  629269 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:11.416965  629269 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:11.417126  629269 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:11.497104  629269 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:11.503350  629269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:11.517429  629269 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:11.517469  629269 api_server.go:166] Checking apiserver status ...
	I0520 13:35:11.517511  629269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:11.534814  629269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:11.545069  629269 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:11.545131  629269 ssh_runner.go:195] Run: ls
	I0520 13:35:11.549658  629269 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:11.554244  629269 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:11.554265  629269 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:11.554276  629269 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:11.554296  629269 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:11.554651  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.554691  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.570875  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0520 13:35:11.571308  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.571769  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.571797  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.572130  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.572342  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:11.575551  629269 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:35:11.575572  629269 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:11.575925  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.575984  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.591337  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
	I0520 13:35:11.591762  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.592276  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.592316  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.592652  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.592840  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:35:11.595610  629269 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:11.596070  629269 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:11.596091  629269 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:11.596259  629269 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:11.596531  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:11.596569  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:11.611320  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0520 13:35:11.611763  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:11.612210  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:11.612238  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:11.612529  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:11.612738  629269 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:35:11.612900  629269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:11.612924  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:35:11.615945  629269 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:11.616522  629269 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:11.616555  629269 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:11.616797  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:35:11.616989  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:35:11.617142  629269 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:35:11.617327  629269 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:35:12.301551  629269 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:12.301608  629269 retry.go:31] will retry after 369.363526ms: dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:15.725607  629269 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:15.725783  629269 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:35:15.725816  629269 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:15.725827  629269 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:35:15.725876  629269 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:15.725892  629269 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:15.726376  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.726451  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.742513  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34277
	I0520 13:35:15.742976  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.743470  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.743499  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.743878  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.744128  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:15.745792  629269 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:15.745810  629269 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:15.746133  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.746178  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.761072  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0520 13:35:15.761543  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.761961  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.761980  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.762367  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.762559  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:15.766118  629269 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:15.766683  629269 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:15.766715  629269 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:15.766932  629269 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:15.767257  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.767296  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.782966  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0520 13:35:15.783472  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.783899  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.783925  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.784216  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.784440  629269 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:15.784670  629269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:15.784699  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:15.787698  629269 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:15.788324  629269 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:15.788360  629269 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:15.788568  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:15.788761  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:15.788946  629269 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:15.789139  629269 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:15.870050  629269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:15.885753  629269 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:15.885788  629269 api_server.go:166] Checking apiserver status ...
	I0520 13:35:15.885825  629269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:15.900239  629269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:15.909713  629269 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:15.909780  629269 ssh_runner.go:195] Run: ls
	I0520 13:35:15.913799  629269 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:15.920069  629269 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:15.920100  629269 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:15.920112  629269 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:15.920132  629269 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:15.920947  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.920994  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.937873  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0520 13:35:15.938378  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.938891  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.938912  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.939224  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.939453  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:15.941114  629269 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:15.941133  629269 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:15.941473  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.941519  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.957904  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0520 13:35:15.958333  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.958945  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.958987  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.959333  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.959549  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:15.962514  629269 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:15.962982  629269 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:15.963019  629269 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:15.963154  629269 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:15.963494  629269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:15.963534  629269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:15.979539  629269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0520 13:35:15.980056  629269 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:15.980587  629269 main.go:141] libmachine: Using API Version  1
	I0520 13:35:15.980606  629269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:15.980985  629269 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:15.981305  629269 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:15.981504  629269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:15.981529  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:15.984207  629269 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:15.984622  629269 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:15.984649  629269 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:15.984771  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:15.985033  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:15.985228  629269 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:15.985418  629269 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:16.065161  629269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:16.080162  629269 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
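For the reachable control-plane nodes, the log above shows the apiserver check ending with an HTTPS probe of https://192.168.39.254:8443/healthz and treating a 200 "ok" as Running. A minimal sketch of that last step only (not minikube's status.go/api_server.go code; the insecure TLS setting and timeout are assumptions for a local test cluster whose endpoint is taken from the log):

	// Sketch of the final healthz probe logged above. NOT minikube's code;
	// skipping TLS verification is an assumption for a self-signed local VIP.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 200 with body "ok" corresponds to "apiserver status = Running" above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}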
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (4.200908954s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:18.386578  629370 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:18.386862  629370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:18.386874  629370 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:18.386878  629370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:18.387083  629370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:18.387259  629370 out.go:298] Setting JSON to false
	I0520 13:35:18.387287  629370 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:18.387333  629370 notify.go:220] Checking for updates...
	I0520 13:35:18.387635  629370 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:18.387651  629370 status.go:255] checking status of ha-170194 ...
	I0520 13:35:18.388102  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.388154  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.405763  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0520 13:35:18.406214  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.406765  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.406792  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.407217  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.407439  629370 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:18.409139  629370 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:18.409155  629370 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:18.409478  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.409518  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.425574  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0520 13:35:18.426020  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.426614  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.426657  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.427037  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.427276  629370 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:18.430321  629370 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:18.430716  629370 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:18.430739  629370 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:18.430913  629370 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:18.431245  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.431291  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.446964  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I0520 13:35:18.447384  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.447927  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.447955  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.449709  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.450179  629370 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:18.450409  629370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:18.450435  629370 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:18.454061  629370 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:18.454591  629370 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:18.454623  629370 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:18.454768  629370 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:18.454947  629370 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:18.455093  629370 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:18.455205  629370 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:18.542829  629370 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:18.549872  629370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:18.565654  629370 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:18.565729  629370 api_server.go:166] Checking apiserver status ...
	I0520 13:35:18.565779  629370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:18.580652  629370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:18.593296  629370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:18.593349  629370 ssh_runner.go:195] Run: ls
	I0520 13:35:18.599003  629370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:18.602957  629370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:18.602982  629370 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:18.602995  629370 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:18.603024  629370 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:18.603316  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.603360  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.618896  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0520 13:35:18.619308  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.619776  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.619804  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.620115  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.620361  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:18.621855  629370 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:35:18.621874  629370 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:18.622166  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.622202  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.638579  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0520 13:35:18.639060  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.639596  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.639620  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.639950  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.640197  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:35:18.643192  629370 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:18.643642  629370 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:18.643671  629370 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:18.643831  629370 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:18.644165  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:18.644207  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:18.659225  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I0520 13:35:18.659667  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:18.660072  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:18.660095  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:18.660514  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:18.660735  629370 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:35:18.660945  629370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:18.660967  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:35:18.663703  629370 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:18.664136  629370 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:18.664167  629370 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:18.664315  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:35:18.664500  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:35:18.664653  629370 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:35:18.664790  629370 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:35:18.797410  629370 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:18.797457  629370 retry.go:31] will retry after 320.271299ms: dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:22.193522  629370 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:22.193614  629370 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:35:22.193633  629370 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:22.193642  629370 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:35:22.193668  629370 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:22.193681  629370 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:22.194126  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.194185  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.212101  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I0520 13:35:22.212689  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.213278  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.213307  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.213668  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.213925  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:22.215445  629370 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:22.215463  629370 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:22.215773  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.215809  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.231811  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0520 13:35:22.232339  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.232868  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.232898  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.233265  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.233517  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:22.236324  629370 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:22.236704  629370 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:22.236728  629370 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:22.236866  629370 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:22.237202  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.237241  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.252080  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0520 13:35:22.252487  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.253011  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.253041  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.253396  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.253615  629370 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:22.253832  629370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:22.253853  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:22.256746  629370 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:22.257184  629370 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:22.257216  629370 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:22.257353  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:22.257548  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:22.257742  629370 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:22.257920  629370 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:22.337472  629370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:22.353784  629370 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:22.353813  629370 api_server.go:166] Checking apiserver status ...
	I0520 13:35:22.353854  629370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:22.367581  629370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:22.376471  629370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:22.376532  629370 ssh_runner.go:195] Run: ls
	I0520 13:35:22.380670  629370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:22.387011  629370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:22.387037  629370 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:22.387049  629370 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:22.387079  629370 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:22.387396  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.387439  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.404402  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0520 13:35:22.404927  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.405572  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.405619  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.406028  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.406238  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:22.408083  629370 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:22.408101  629370 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:22.408464  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.408507  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.424402  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0520 13:35:22.424889  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.425469  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.425492  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.425879  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.426109  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:22.428727  629370 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:22.429202  629370 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:22.429235  629370 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:22.429335  629370 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:22.429721  629370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:22.429761  629370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:22.444566  629370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0520 13:35:22.445068  629370 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:22.445553  629370 main.go:141] libmachine: Using API Version  1
	I0520 13:35:22.445575  629370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:22.445866  629370 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:22.446056  629370 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:22.446243  629370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:22.446264  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:22.449145  629370 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:22.449567  629370 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:22.449596  629370 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:22.449693  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:22.449848  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:22.450015  629370 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:22.450146  629370 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:22.528914  629370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:22.542708  629370 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (3.70370157s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:27.445113  629487 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:27.445380  629487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:27.445390  629487 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:27.445394  629487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:27.445566  629487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:27.445735  629487 out.go:298] Setting JSON to false
	I0520 13:35:27.445759  629487 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:27.445800  629487 notify.go:220] Checking for updates...
	I0520 13:35:27.446164  629487 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:27.446178  629487 status.go:255] checking status of ha-170194 ...
	I0520 13:35:27.446539  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.446601  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.466590  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0520 13:35:27.467161  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.467769  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.467796  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.468288  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.468556  629487 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:27.470482  629487 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:27.470502  629487 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:27.470934  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.470995  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.487373  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0520 13:35:27.487891  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.488490  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.488523  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.488955  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.489172  629487 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:27.491866  629487 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:27.492259  629487 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:27.492303  629487 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:27.492421  629487 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:27.492710  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.492744  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.510397  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0520 13:35:27.511020  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.511496  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.511520  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.511873  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.512069  629487 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:27.512293  629487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:27.512330  629487 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:27.514854  629487 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:27.515318  629487 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:27.515379  629487 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:27.515453  629487 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:27.515666  629487 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:27.515806  629487 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:27.515964  629487 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:27.596928  629487 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:27.602623  629487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:27.618423  629487 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:27.618466  629487 api_server.go:166] Checking apiserver status ...
	I0520 13:35:27.618514  629487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:27.633033  629487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:27.642499  629487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:27.642570  629487 ssh_runner.go:195] Run: ls
	I0520 13:35:27.647435  629487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:27.651956  629487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:27.651981  629487 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:27.651994  629487 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:27.652018  629487 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:27.652335  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.652383  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.668891  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0520 13:35:27.669371  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.669907  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.669936  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.670313  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.670584  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:27.672472  629487 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:35:27.672493  629487 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:27.672826  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.672862  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.687829  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0520 13:35:27.688256  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.688727  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.688751  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.689139  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.689366  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:35:27.692469  629487 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:27.692916  629487 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:27.692944  629487 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:27.693062  629487 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:35:27.693419  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:27.693456  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:27.708141  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0520 13:35:27.708575  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:27.709012  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:27.709039  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:27.709448  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:27.709646  629487 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:35:27.709847  629487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:27.709870  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:35:27.712383  629487 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:27.712747  629487 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:35:27.712777  629487 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:35:27.712938  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:35:27.713124  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:35:27.713313  629487 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:35:27.713470  629487 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	W0520 13:35:30.765532  629487 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.155:22: connect: no route to host
	W0520 13:35:30.765639  629487 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0520 13:35:30.765666  629487 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:30.765677  629487 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:35:30.765700  629487 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	I0520 13:35:30.765707  629487 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:30.766043  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.766093  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:30.780865  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0520 13:35:30.781386  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:30.781973  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:30.781998  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:30.782333  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:30.782532  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:30.784260  629487 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:30.784276  629487 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:30.784548  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.784582  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:30.799148  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0520 13:35:30.799693  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:30.800202  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:30.800219  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:30.800532  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:30.800733  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:30.803815  629487 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:30.804271  629487 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:30.804300  629487 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:30.804394  629487 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:30.804721  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.804763  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:30.820089  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0520 13:35:30.820572  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:30.821019  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:30.821043  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:30.821458  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:30.821730  629487 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:30.822108  629487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:30.822134  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:30.825503  629487 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:30.826013  629487 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:30.826046  629487 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:30.826228  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:30.826456  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:30.826653  629487 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:30.826857  629487 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:30.905946  629487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:30.920531  629487 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:30.920556  629487 api_server.go:166] Checking apiserver status ...
	I0520 13:35:30.920591  629487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:30.935173  629487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:30.944550  629487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:30.944607  629487 ssh_runner.go:195] Run: ls
	I0520 13:35:30.948705  629487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:30.952729  629487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:30.952751  629487 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:30.952759  629487 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:30.952783  629487 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:30.953110  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.953148  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:30.968488  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0520 13:35:30.968884  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:30.969459  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:30.969485  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:30.969834  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:30.970057  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:30.971610  629487 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:30.971627  629487 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:30.971892  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.971922  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:30.987003  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I0520 13:35:30.987458  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:30.987985  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:30.988014  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:30.988451  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:30.988680  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:30.991381  629487 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:30.991802  629487 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:30.991834  629487 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:30.992002  629487 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:30.992276  629487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:30.992311  629487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:31.008276  629487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0520 13:35:31.008730  629487 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:31.009224  629487 main.go:141] libmachine: Using API Version  1
	I0520 13:35:31.009265  629487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:31.009595  629487 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:31.009817  629487 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:31.010028  629487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:31.010051  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:31.012817  629487 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:31.013513  629487 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:31.013543  629487 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:31.013778  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:31.013975  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:31.014109  629487 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:31.014233  629487 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:31.088488  629487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:31.102388  629487 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 7 (628.175098ms)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:41.910155  629640 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:41.910447  629640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:41.910459  629640 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:41.910463  629640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:41.910620  629640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:41.910864  629640 out.go:298] Setting JSON to false
	I0520 13:35:41.910898  629640 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:41.911047  629640 notify.go:220] Checking for updates...
	I0520 13:35:41.911242  629640 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:41.911259  629640 status.go:255] checking status of ha-170194 ...
	I0520 13:35:41.911658  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:41.911718  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:41.930133  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0520 13:35:41.930769  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:41.931463  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:41.931490  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:41.931850  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:41.932078  629640 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:41.933856  629640 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:41.933885  629640 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:41.934309  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:41.934378  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:41.949476  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0520 13:35:41.949983  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:41.950613  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:41.950640  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:41.951034  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:41.951267  629640 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:41.954670  629640 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:41.955202  629640 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:41.955238  629640 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:41.955415  629640 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:41.955869  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:41.955923  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:41.974623  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0520 13:35:41.975248  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:41.975865  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:41.975895  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:41.976275  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:41.976475  629640 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:41.976703  629640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:41.976730  629640 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:41.979864  629640 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:41.980294  629640 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:41.980321  629640 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:41.980483  629640 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:41.980681  629640 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:41.980878  629640 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:41.981031  629640 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:42.069308  629640 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:42.076108  629640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:42.092613  629640 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:42.092654  629640 api_server.go:166] Checking apiserver status ...
	I0520 13:35:42.092697  629640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:42.107767  629640 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:42.117686  629640 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:42.117745  629640 ssh_runner.go:195] Run: ls
	I0520 13:35:42.121939  629640 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:42.128046  629640 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:42.128072  629640 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:42.128088  629640 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:42.128115  629640 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:42.128552  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.128603  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.145079  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0520 13:35:42.145553  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.146236  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.146266  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.146623  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.146879  629640 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:42.148640  629640 status.go:330] ha-170194-m02 host status = "Stopped" (err=<nil>)
	I0520 13:35:42.148652  629640 status.go:343] host is not running, skipping remaining checks
	I0520 13:35:42.148658  629640 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:42.148675  629640 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:42.148931  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.148964  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.163520  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0520 13:35:42.163992  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.164539  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.164569  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.164876  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.165095  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:42.166738  629640 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:42.166756  629640 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:42.167078  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.167112  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.182420  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0520 13:35:42.182905  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.183479  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.183510  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.183884  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.184133  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:42.187219  629640 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:42.187644  629640 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:42.187675  629640 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:42.187764  629640 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:42.188057  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.188091  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.203403  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0520 13:35:42.203904  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.204331  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.204352  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.204766  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.204948  629640 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:42.205142  629640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:42.205171  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:42.208205  629640 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:42.208653  629640 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:42.208685  629640 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:42.208831  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:42.208998  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:42.209154  629640 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:42.209350  629640 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:42.293641  629640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:42.310125  629640 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:42.310154  629640 api_server.go:166] Checking apiserver status ...
	I0520 13:35:42.310202  629640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:42.323551  629640 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:42.333029  629640 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:42.333088  629640 ssh_runner.go:195] Run: ls
	I0520 13:35:42.338121  629640 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:42.342558  629640 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:42.342581  629640 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:42.342592  629640 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:42.342615  629640 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:42.342910  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.342966  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.359668  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0520 13:35:42.360159  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.360604  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.360623  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.360956  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.361181  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:42.362778  629640 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:42.362798  629640 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:42.363245  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.363288  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.378980  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0520 13:35:42.379405  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.379851  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.379868  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.380200  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.380395  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:42.383135  629640 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:42.383572  629640 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:42.383601  629640 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:42.383710  629640 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:42.384003  629640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:42.384050  629640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:42.398626  629640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0520 13:35:42.399059  629640 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:42.399528  629640 main.go:141] libmachine: Using API Version  1
	I0520 13:35:42.399561  629640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:42.399846  629640 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:42.400009  629640 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:42.400225  629640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:42.400249  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:42.402852  629640 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:42.403332  629640 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:42.403357  629640 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:42.403534  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:42.403723  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:42.403889  629640 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:42.404053  629640 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:42.481132  629640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:42.494594  629640 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
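(Editorial note, not part of the recorded test output.) In the runs above, the status command exits with code 3 while ha-170194-m02 is unreachable over SSH ("no route to host", host: Error) and with code 7 once the host is reported as Stopped. A minimal Go sketch, assuming the same binary path and profile name used by the harness, of re-running that check and branching on those two observed exit codes:

	// sketch.go -- illustrative only; reuses the command seen at ha_test.go:428.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test harness records above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-170194",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes reported healthy")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 3:
			// Matches the run where m02's SSH dial failed with "no route to host".
			fmt.Println("a host reported an error")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Matches the runs where m02 is reported as Stopped.
			fmt.Println("at least one node is stopped")
		default:
			fmt.Println("status failed:", err)
		}
	}
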
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 7 (608.024594ms)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-170194-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:52.385975  629744 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:52.386259  629744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:52.386308  629744 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:52.386320  629744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:52.386541  629744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:52.386712  629744 out.go:298] Setting JSON to false
	I0520 13:35:52.386740  629744 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:52.386780  629744 notify.go:220] Checking for updates...
	I0520 13:35:52.387102  629744 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:52.387117  629744 status.go:255] checking status of ha-170194 ...
	I0520 13:35:52.387491  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.387562  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.406235  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
	I0520 13:35:52.406743  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.407503  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.407528  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.407859  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.408045  629744 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:35:52.409678  629744 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:35:52.409698  629744 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:52.409963  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.410006  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.424219  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42241
	I0520 13:35:52.424631  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.425171  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.425199  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.425590  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.425804  629744 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:35:52.428731  629744 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:52.429164  629744 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:52.429201  629744 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:52.429381  629744 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:35:52.429670  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.429712  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.444021  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0520 13:35:52.444494  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.445360  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.445449  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.446425  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.447131  629744 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:35:52.447544  629744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:52.447597  629744 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:35:52.450353  629744 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:52.450920  629744 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:35:52.450946  629744 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:35:52.451127  629744 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:35:52.451312  629744 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:35:52.451479  629744 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:35:52.451639  629744 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:35:52.533739  629744 ssh_runner.go:195] Run: systemctl --version
	I0520 13:35:52.540619  629744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:52.559834  629744 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:52.559875  629744 api_server.go:166] Checking apiserver status ...
	I0520 13:35:52.559907  629744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:52.573924  629744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0520 13:35:52.582861  629744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:52.582920  629744 ssh_runner.go:195] Run: ls
	I0520 13:35:52.587139  629744 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:52.592825  629744 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:52.592854  629744 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:35:52.592865  629744 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
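
The node check recorded above reduces to three probes: disk usage of /var, `systemctl is-active --quiet kubelet`, and an HTTPS GET against the apiserver /healthz endpoint at the server address taken from the kubeconfig (https://192.168.39.254:8443). A minimal Go sketch of those probes, run locally rather than through minikube's ssh_runner (the HTTP client settings, InsecureSkipVerify, and the hard-coded endpoint are assumptions for illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Disk usage of /var: second line of df output, fifth column (e.g. "23%").
	out, _ := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	fmt.Println("/var usage:", strings.TrimSpace(string(out)))

	// systemctl is-active --quiet exits 0 only when the kubelet unit is active.
	fmt.Println("kubelet active:", exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil)

	// Probe the apiserver health endpoint. Skipping certificate verification is a
	// shortcut for the sketch; minikube trusts its own CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
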
	I0520 13:35:52.592886  629744 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:35:52.593300  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.593347  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.608814  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0520 13:35:52.609229  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.609773  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.609796  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.610141  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.610417  629744 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:35:52.612223  629744 status.go:330] ha-170194-m02 host status = "Stopped" (err=<nil>)
	I0520 13:35:52.612239  629744 status.go:343] host is not running, skipping remaining checks
	I0520 13:35:52.612248  629744 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:52.612267  629744 status.go:255] checking status of ha-170194-m03 ...
	I0520 13:35:52.612531  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.612580  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.627451  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0520 13:35:52.627958  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.628468  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.628494  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.628882  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.629118  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:52.630659  629744 status.go:330] ha-170194-m03 host status = "Running" (err=<nil>)
	I0520 13:35:52.630677  629744 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:52.630983  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.631039  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.645804  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0520 13:35:52.646182  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.646617  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.646636  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.646918  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.647116  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:35:52.650074  629744 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:52.650510  629744 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:52.650540  629744 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:52.650651  629744 host.go:66] Checking if "ha-170194-m03" exists ...
	I0520 13:35:52.650947  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.650988  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.667108  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0520 13:35:52.667517  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.667947  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.667972  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.668318  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.668522  629744 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:52.668745  629744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:52.668764  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:52.671414  629744 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:52.671834  629744 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:52.671854  629744 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:52.671998  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:52.672171  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:52.672334  629744 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:52.672467  629744 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:52.749120  629744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:52.764510  629744 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:35:52.764540  629744 api_server.go:166] Checking apiserver status ...
	I0520 13:35:52.764570  629744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:35:52.779756  629744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0520 13:35:52.789205  629744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:35:52.789287  629744 ssh_runner.go:195] Run: ls
	I0520 13:35:52.793432  629744 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:35:52.797599  629744 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:35:52.797624  629744 status.go:422] ha-170194-m03 apiserver status = Running (err=<nil>)
	I0520 13:35:52.797635  629744 status.go:257] ha-170194-m03 status: &{Name:ha-170194-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:35:52.797651  629744 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:35:52.798028  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.798073  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.814190  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0520 13:35:52.814658  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.815169  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.815193  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.815572  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.815781  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:52.817556  629744 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:35:52.817574  629744 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:52.817920  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.817961  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.832611  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0520 13:35:52.833060  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.833738  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.833771  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.834155  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.834361  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:35:52.837603  629744 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:52.838044  629744 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:52.838077  629744 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:52.838233  629744 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:35:52.838604  629744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:52.838638  629744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:52.853376  629744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0520 13:35:52.853766  629744 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:52.854289  629744 main.go:141] libmachine: Using API Version  1
	I0520 13:35:52.854308  629744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:52.854599  629744 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:52.854824  629744 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:52.855029  629744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:35:52.855056  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:52.857912  629744 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:52.858382  629744 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:52.858410  629744 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:52.858575  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:52.858732  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:52.858895  629744 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:52.859042  629744 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:52.936695  629744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:35:52.950269  629744 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
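
Each status.go:257 line above prints a per-node record with the same fields; a field-for-field mirror in Go (the type name NodeStatus and the literal below are illustrative, not minikube's exported API):

package main

import "fmt"

// NodeStatus mirrors the fields printed by the status lines above.
type NodeStatus struct {
	Name       string
	Host       string // "Running" or "Stopped"
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	m04 := NodeStatus{
		Name: "ha-170194-m04", Host: "Running", Kubelet: "Running",
		APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true,
	}
	fmt.Printf("%+v\n", m04)
}
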

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr" : exit status 7
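
minikube status documents that its exit code encodes the VM, cluster, and Kubernetes state as bit flags, so exit status 7 here is consistent with the stopped ha-170194-m02 being reported as host, cluster, and Kubernetes all not running. A sketch of that reading (the constant names are illustrative):

package main

import "fmt"

const (
	hostNotRunning    = 1 << 0 // 1: the VM/host is not running
	clusterNotRunning = 1 << 1 // 2: the cluster components are not running
	k8sNotRunning     = 1 << 2 // 4: Kubernetes is not reachable
)

func main() {
	exit := hostNotRunning | clusterNotRunning | k8sNotRunning
	fmt.Println("aggregate exit status:", exit) // 7, matching the failure above
}
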
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-170194 -n ha-170194
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-170194 logs -n 25: (1.4154563s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m03_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m04 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp testdata/cp-test.txt                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m04_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03:/home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m03 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-170194 node stop m02 -v=7                                                     | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-170194 node start m02 -v=7                                                    | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:27:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:27:54.787808  624195 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:27:54.788072  624195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:54.788083  624195 out.go:304] Setting ErrFile to fd 2...
	I0520 13:27:54.788090  624195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:54.788302  624195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:27:54.788863  624195 out.go:298] Setting JSON to false
	I0520 13:27:54.789842  624195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11415,"bootTime":1716200260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:27:54.789902  624195 start.go:139] virtualization: kvm guest
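
The hostinfo line above is a gopsutil-style host probe; a minimal sketch that gathers the same fields (assuming github.com/shirou/gopsutil/v3 is available):

package main

import (
	"fmt"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	info, err := host.Info()
	if err != nil {
		panic(err)
	}
	// Prints the same kind of data as start.go:129: hostname, uptime, platform,
	// kernel, and virtualization role (here "guest" under KVM).
	fmt.Printf("%s: up %ds, %s %s, kernel %s, virt %s/%s\n",
		info.Hostname, info.Uptime, info.Platform, info.PlatformVersion,
		info.KernelVersion, info.VirtualizationSystem, info.VirtualizationRole)
}
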
	I0520 13:27:54.792915  624195 out.go:177] * [ha-170194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:27:54.795227  624195 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:27:54.795193  624195 notify.go:220] Checking for updates...
	I0520 13:27:54.797364  624195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:27:54.799684  624195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:27:54.801844  624195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:54.803952  624195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:27:54.805891  624195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:27:54.807989  624195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:27:54.843729  624195 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 13:27:54.845864  624195 start.go:297] selected driver: kvm2
	I0520 13:27:54.845891  624195 start.go:901] validating driver "kvm2" against <nil>
	I0520 13:27:54.845909  624195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:27:54.846658  624195 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:27:54.846750  624195 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:27:54.862551  624195 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:27:54.862617  624195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 13:27:54.862816  624195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:27:54.862872  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:27:54.862883  624195 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 13:27:54.862888  624195 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 13:27:54.862953  624195 start.go:340] cluster config:
	{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:27:54.863053  624195 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:27:54.865627  624195 out.go:177] * Starting "ha-170194" primary control-plane node in "ha-170194" cluster
	I0520 13:27:54.867679  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:27:54.867715  624195 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:27:54.867723  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:27:54.867784  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:27:54.867794  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:27:54.868073  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:27:54.868092  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json: {Name:mk4d4f049f9025d6d1dcc6479cee744453ad1838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:27:54.868224  624195 start.go:360] acquireMachinesLock for ha-170194: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:27:54.868254  624195 start.go:364] duration metric: took 16.059µs to acquireMachinesLock for "ha-170194"
	I0520 13:27:54.868268  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:27:54.868333  624195 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 13:27:54.870806  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:27:54.870938  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:54.870974  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:54.885147  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0520 13:27:54.885650  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:54.886178  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:27:54.886201  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:54.886581  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:54.886848  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:27:54.887034  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:27:54.887235  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:27:54.887272  624195 client.go:168] LocalClient.Create starting
	I0520 13:27:54.887319  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:27:54.887353  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:27:54.887379  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:27:54.887474  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:27:54.887508  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:27:54.887522  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:27:54.887544  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:27:54.887555  624195 main.go:141] libmachine: (ha-170194) Calling .PreCreateCheck
	I0520 13:27:54.887917  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:27:54.888401  624195 main.go:141] libmachine: Creating machine...
	I0520 13:27:54.888423  624195 main.go:141] libmachine: (ha-170194) Calling .Create
	I0520 13:27:54.888571  624195 main.go:141] libmachine: (ha-170194) Creating KVM machine...
	I0520 13:27:54.889884  624195 main.go:141] libmachine: (ha-170194) DBG | found existing default KVM network
	I0520 13:27:54.890580  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:54.890457  624219 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0520 13:27:54.890611  624195 main.go:141] libmachine: (ha-170194) DBG | created network xml: 
	I0520 13:27:54.890624  624195 main.go:141] libmachine: (ha-170194) DBG | <network>
	I0520 13:27:54.890637  624195 main.go:141] libmachine: (ha-170194) DBG |   <name>mk-ha-170194</name>
	I0520 13:27:54.890646  624195 main.go:141] libmachine: (ha-170194) DBG |   <dns enable='no'/>
	I0520 13:27:54.890663  624195 main.go:141] libmachine: (ha-170194) DBG |   
	I0520 13:27:54.890675  624195 main.go:141] libmachine: (ha-170194) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 13:27:54.890690  624195 main.go:141] libmachine: (ha-170194) DBG |     <dhcp>
	I0520 13:27:54.890711  624195 main.go:141] libmachine: (ha-170194) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 13:27:54.890728  624195 main.go:141] libmachine: (ha-170194) DBG |     </dhcp>
	I0520 13:27:54.890738  624195 main.go:141] libmachine: (ha-170194) DBG |   </ip>
	I0520 13:27:54.890748  624195 main.go:141] libmachine: (ha-170194) DBG |   
	I0520 13:27:54.890761  624195 main.go:141] libmachine: (ha-170194) DBG | </network>
	I0520 13:27:54.890777  624195 main.go:141] libmachine: (ha-170194) DBG | 
	I0520 13:27:54.896065  624195 main.go:141] libmachine: (ha-170194) DBG | trying to create private KVM network mk-ha-170194 192.168.39.0/24...
	I0520 13:27:54.967027  624195 main.go:141] libmachine: (ha-170194) DBG | private KVM network mk-ha-170194 192.168.39.0/24 created
	I0520 13:27:54.967086  624195 main.go:141] libmachine: (ha-170194) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 ...
	I0520 13:27:54.967102  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:54.966962  624219 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:54.967468  624195 main.go:141] libmachine: (ha-170194) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:27:54.967612  624195 main.go:141] libmachine: (ha-170194) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:27:55.252359  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.252215  624219 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa...
	I0520 13:27:55.368707  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.368606  624219 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/ha-170194.rawdisk...
	I0520 13:27:55.368742  624195 main.go:141] libmachine: (ha-170194) DBG | Writing magic tar header
	I0520 13:27:55.368754  624195 main.go:141] libmachine: (ha-170194) DBG | Writing SSH key tar header
	I0520 13:27:55.368766  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:55.368730  624219 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 ...
	I0520 13:27:55.368900  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194
	I0520 13:27:55.368933  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194 (perms=drwx------)
	I0520 13:27:55.368949  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:27:55.368963  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:55.368976  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:27:55.368992  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:27:55.369000  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:27:55.369009  624195 main.go:141] libmachine: (ha-170194) DBG | Checking permissions on dir: /home
	I0520 13:27:55.369015  624195 main.go:141] libmachine: (ha-170194) DBG | Skipping /home - not owner
	I0520 13:27:55.369027  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:27:55.369043  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:27:55.369057  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:27:55.369071  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:27:55.369084  624195 main.go:141] libmachine: (ha-170194) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:27:55.369092  624195 main.go:141] libmachine: (ha-170194) Creating domain...
	I0520 13:27:55.370115  624195 main.go:141] libmachine: (ha-170194) define libvirt domain using xml: 
	I0520 13:27:55.370139  624195 main.go:141] libmachine: (ha-170194) <domain type='kvm'>
	I0520 13:27:55.370149  624195 main.go:141] libmachine: (ha-170194)   <name>ha-170194</name>
	I0520 13:27:55.370158  624195 main.go:141] libmachine: (ha-170194)   <memory unit='MiB'>2200</memory>
	I0520 13:27:55.370167  624195 main.go:141] libmachine: (ha-170194)   <vcpu>2</vcpu>
	I0520 13:27:55.370174  624195 main.go:141] libmachine: (ha-170194)   <features>
	I0520 13:27:55.370183  624195 main.go:141] libmachine: (ha-170194)     <acpi/>
	I0520 13:27:55.370190  624195 main.go:141] libmachine: (ha-170194)     <apic/>
	I0520 13:27:55.370200  624195 main.go:141] libmachine: (ha-170194)     <pae/>
	I0520 13:27:55.370208  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370218  624195 main.go:141] libmachine: (ha-170194)   </features>
	I0520 13:27:55.370224  624195 main.go:141] libmachine: (ha-170194)   <cpu mode='host-passthrough'>
	I0520 13:27:55.370229  624195 main.go:141] libmachine: (ha-170194)   
	I0520 13:27:55.370237  624195 main.go:141] libmachine: (ha-170194)   </cpu>
	I0520 13:27:55.370244  624195 main.go:141] libmachine: (ha-170194)   <os>
	I0520 13:27:55.370252  624195 main.go:141] libmachine: (ha-170194)     <type>hvm</type>
	I0520 13:27:55.370308  624195 main.go:141] libmachine: (ha-170194)     <boot dev='cdrom'/>
	I0520 13:27:55.370344  624195 main.go:141] libmachine: (ha-170194)     <boot dev='hd'/>
	I0520 13:27:55.370384  624195 main.go:141] libmachine: (ha-170194)     <bootmenu enable='no'/>
	I0520 13:27:55.370412  624195 main.go:141] libmachine: (ha-170194)   </os>
	I0520 13:27:55.370422  624195 main.go:141] libmachine: (ha-170194)   <devices>
	I0520 13:27:55.370433  624195 main.go:141] libmachine: (ha-170194)     <disk type='file' device='cdrom'>
	I0520 13:27:55.370451  624195 main.go:141] libmachine: (ha-170194)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/boot2docker.iso'/>
	I0520 13:27:55.370464  624195 main.go:141] libmachine: (ha-170194)       <target dev='hdc' bus='scsi'/>
	I0520 13:27:55.370475  624195 main.go:141] libmachine: (ha-170194)       <readonly/>
	I0520 13:27:55.370489  624195 main.go:141] libmachine: (ha-170194)     </disk>
	I0520 13:27:55.370504  624195 main.go:141] libmachine: (ha-170194)     <disk type='file' device='disk'>
	I0520 13:27:55.370516  624195 main.go:141] libmachine: (ha-170194)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:27:55.370532  624195 main.go:141] libmachine: (ha-170194)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/ha-170194.rawdisk'/>
	I0520 13:27:55.370542  624195 main.go:141] libmachine: (ha-170194)       <target dev='hda' bus='virtio'/>
	I0520 13:27:55.370553  624195 main.go:141] libmachine: (ha-170194)     </disk>
	I0520 13:27:55.370567  624195 main.go:141] libmachine: (ha-170194)     <interface type='network'>
	I0520 13:27:55.370580  624195 main.go:141] libmachine: (ha-170194)       <source network='mk-ha-170194'/>
	I0520 13:27:55.370590  624195 main.go:141] libmachine: (ha-170194)       <model type='virtio'/>
	I0520 13:27:55.370602  624195 main.go:141] libmachine: (ha-170194)     </interface>
	I0520 13:27:55.370612  624195 main.go:141] libmachine: (ha-170194)     <interface type='network'>
	I0520 13:27:55.370623  624195 main.go:141] libmachine: (ha-170194)       <source network='default'/>
	I0520 13:27:55.370636  624195 main.go:141] libmachine: (ha-170194)       <model type='virtio'/>
	I0520 13:27:55.370647  624195 main.go:141] libmachine: (ha-170194)     </interface>
	I0520 13:27:55.370657  624195 main.go:141] libmachine: (ha-170194)     <serial type='pty'>
	I0520 13:27:55.370669  624195 main.go:141] libmachine: (ha-170194)       <target port='0'/>
	I0520 13:27:55.370678  624195 main.go:141] libmachine: (ha-170194)     </serial>
	I0520 13:27:55.370690  624195 main.go:141] libmachine: (ha-170194)     <console type='pty'>
	I0520 13:27:55.370701  624195 main.go:141] libmachine: (ha-170194)       <target type='serial' port='0'/>
	I0520 13:27:55.370711  624195 main.go:141] libmachine: (ha-170194)     </console>
	I0520 13:27:55.370721  624195 main.go:141] libmachine: (ha-170194)     <rng model='virtio'>
	I0520 13:27:55.370732  624195 main.go:141] libmachine: (ha-170194)       <backend model='random'>/dev/random</backend>
	I0520 13:27:55.370741  624195 main.go:141] libmachine: (ha-170194)     </rng>
	I0520 13:27:55.370749  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370759  624195 main.go:141] libmachine: (ha-170194)     
	I0520 13:27:55.370767  624195 main.go:141] libmachine: (ha-170194)   </devices>
	I0520 13:27:55.370774  624195 main.go:141] libmachine: (ha-170194) </domain>
	I0520 13:27:55.370779  624195 main.go:141] libmachine: (ha-170194) 
	I0520 13:27:55.375705  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:d6:7b:51 in network default
	I0520 13:27:55.376247  624195 main.go:141] libmachine: (ha-170194) Ensuring networks are active...
	I0520 13:27:55.376271  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:55.376855  624195 main.go:141] libmachine: (ha-170194) Ensuring network default is active
	I0520 13:27:55.377222  624195 main.go:141] libmachine: (ha-170194) Ensuring network mk-ha-170194 is active
	I0520 13:27:55.377700  624195 main.go:141] libmachine: (ha-170194) Getting domain xml...
	I0520 13:27:55.378335  624195 main.go:141] libmachine: (ha-170194) Creating domain...
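
The driver defines the domain from the XML above and boots it through the libvirt API. An equivalent sketch using the virsh CLI instead of that API (the file path and helper are assumptions for illustration):

package main

import (
	"log"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it, the virsh
// equivalent of the "define libvirt domain" and "Creating domain..." steps above.
func defineAndStart(xmlPath, domain string) error {
	if err := exec.Command("virsh", "define", xmlPath).Run(); err != nil {
		return err
	}
	return exec.Command("virsh", "start", domain).Run()
}

func main() {
	// Hypothetical path; the driver builds the XML in memory rather than on disk.
	if err := defineAndStart("/tmp/ha-170194.xml", "ha-170194"); err != nil {
		log.Fatal(err)
	}
}
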
	I0520 13:27:56.557336  624195 main.go:141] libmachine: (ha-170194) Waiting to get IP...
	I0520 13:27:56.558101  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:56.558467  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:56.558559  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:56.558473  624219 retry.go:31] will retry after 230.582871ms: waiting for machine to come up
	I0520 13:27:56.790941  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:56.791484  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:56.791514  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:56.791443  624219 retry.go:31] will retry after 355.829641ms: waiting for machine to come up
	I0520 13:27:57.149070  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:57.149476  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:57.149502  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:57.149420  624219 retry.go:31] will retry after 344.241691ms: waiting for machine to come up
	I0520 13:27:57.494945  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:57.495413  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:57.495449  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:57.495342  624219 retry.go:31] will retry after 542.878171ms: waiting for machine to come up
	I0520 13:27:58.040037  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:58.040469  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:58.040498  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:58.040418  624219 retry.go:31] will retry after 500.259105ms: waiting for machine to come up
	I0520 13:27:58.542079  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:58.542505  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:58.542538  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:58.542436  624219 retry.go:31] will retry after 931.085496ms: waiting for machine to come up
	I0520 13:27:59.475499  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:27:59.475935  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:27:59.475975  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:27:59.475876  624219 retry.go:31] will retry after 721.553184ms: waiting for machine to come up
	I0520 13:28:00.199611  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:00.200101  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:00.200127  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:00.200042  624219 retry.go:31] will retry after 1.117618537s: waiting for machine to come up
	I0520 13:28:01.319380  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:01.319842  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:01.319873  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:01.319774  624219 retry.go:31] will retry after 1.394871155s: waiting for machine to come up
	I0520 13:28:02.717949  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:02.718384  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:02.718411  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:02.718352  624219 retry.go:31] will retry after 1.47499546s: waiting for machine to come up
	I0520 13:28:04.195297  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:04.195762  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:04.195792  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:04.195710  624219 retry.go:31] will retry after 1.787841557s: waiting for machine to come up
	I0520 13:28:05.985640  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:05.986161  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:05.986192  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:05.986100  624219 retry.go:31] will retry after 2.914900147s: waiting for machine to come up
	I0520 13:28:08.904215  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:08.904590  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:08.904609  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:08.904554  624219 retry.go:31] will retry after 3.774056973s: waiting for machine to come up
	I0520 13:28:12.682006  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:12.682480  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find current IP address of domain ha-170194 in network mk-ha-170194
	I0520 13:28:12.682506  624195 main.go:141] libmachine: (ha-170194) DBG | I0520 13:28:12.682442  624219 retry.go:31] will retry after 3.776735044s: waiting for machine to come up
	I0520 13:28:16.461298  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.461814  624195 main.go:141] libmachine: (ha-170194) Found IP for machine: 192.168.39.92
	I0520 13:28:16.461838  624195 main.go:141] libmachine: (ha-170194) Reserving static IP address...
	I0520 13:28:16.461851  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has current primary IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.462231  624195 main.go:141] libmachine: (ha-170194) DBG | unable to find host DHCP lease matching {name: "ha-170194", mac: "52:54:00:4b:8c:ad", ip: "192.168.39.92"} in network mk-ha-170194
	I0520 13:28:16.538038  624195 main.go:141] libmachine: (ha-170194) DBG | Getting to WaitForSSH function...
	I0520 13:28:16.538071  624195 main.go:141] libmachine: (ha-170194) Reserved static IP address: 192.168.39.92
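
The retry.go:31 lines above are a wait-for-IP loop: poll the DHCP leases of mk-ha-170194 for the domain's MAC and sleep a growing, jittered interval between attempts. A minimal sketch of that pattern (lookupLease is a hypothetical stand-in for the lease query):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookupLease(mac); ip != "" {
			return ip, nil
		}
		// Grow the delay with jitter, roughly mirroring the 230ms..3.7s steps logged above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for an IP")
}

// lookupLease is a hypothetical stand-in for reading the network's DHCP leases.
func lookupLease(mac string) string { return "" }

func main() {
	fmt.Println(waitForIP("52:54:00:4b:8c:ad", 3*time.Second))
}
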
	I0520 13:28:16.538086  624195 main.go:141] libmachine: (ha-170194) Waiting for SSH to be available...
	I0520 13:28:16.540602  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.541069  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.541291  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.541330  624195 main.go:141] libmachine: (ha-170194) DBG | Using SSH client type: external
	I0520 13:28:16.541352  624195 main.go:141] libmachine: (ha-170194) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa (-rw-------)
	I0520 13:28:16.541378  624195 main.go:141] libmachine: (ha-170194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:28:16.541391  624195 main.go:141] libmachine: (ha-170194) DBG | About to run SSH command:
	I0520 13:28:16.541400  624195 main.go:141] libmachine: (ha-170194) DBG | exit 0
	I0520 13:28:16.665187  624195 main.go:141] libmachine: (ha-170194) DBG | SSH cmd err, output: <nil>: 
	I0520 13:28:16.665497  624195 main.go:141] libmachine: (ha-170194) KVM machine creation complete!
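The "will retry after Xs" lines above are a wait-with-backoff loop around the driver's IP/SSH probe. A minimal Go sketch of the same idea, assuming a plain TCP probe; the address, delays and function name are illustrative, not minikube's actual retry helper:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net"
	"time"
)

// waitForSSH polls a TCP port with a growing, jittered delay, loosely
// mirroring the "will retry after Xs: waiting for machine to come up"
// lines above. Address and timings are illustrative only.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 4*time.Second {
			delay *= 2 // grow the base delay, capped at a few seconds
		}
	}
}

func main() {
	fmt.Println(waitForSSH("192.168.39.92:22", 1*time.Minute))
}
```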
	I0520 13:28:16.665853  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:28:16.666420  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:16.666630  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:16.666784  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:28:16.666796  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:16.668190  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:28:16.668223  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:28:16.668256  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:28:16.668268  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.670743  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.671161  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.671199  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.671275  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.671492  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.671653  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.671790  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.671964  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.672292  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.672311  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:28:16.776402  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:28:16.776427  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:28:16.776437  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.779402  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.779733  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.779757  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.779919  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.780113  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.780297  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.780415  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.780543  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.780724  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.780739  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:28:16.877834  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:28:16.877917  624195 main.go:141] libmachine: found compatible host: buildroot
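Provisioner detection keys off the /etc/os-release contents fetched with the `cat` above. A rough Go sketch of that parse, assuming only the ID field matters; the real matching is more involved and lives in the provisioner code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID reads an os-release style file and returns the ID field,
// which is what the "found compatible host: buildroot" match keys on.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			// Strip optional surrounding quotes, e.g. ID="buildroot".
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", sc.Err()
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	fmt.Println(id, err)
}
```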
	I0520 13:28:16.877928  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:28:16.877942  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:16.878207  624195 buildroot.go:166] provisioning hostname "ha-170194"
	I0520 13:28:16.878241  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:16.878464  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.881126  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.881567  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.881600  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.881708  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.881988  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.882137  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.882325  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.882495  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.882655  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.882667  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194 && echo "ha-170194" | sudo tee /etc/hostname
	I0520 13:28:16.994455  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:28:16.994496  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:16.997222  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.997580  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:16.997603  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:16.997774  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:16.997979  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.998167  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:16.998322  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:16.998500  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:16.998684  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:16.998701  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:28:17.105422  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:28:17.105475  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:28:17.105546  624195 buildroot.go:174] setting up certificates
	I0520 13:28:17.105562  624195 provision.go:84] configureAuth start
	I0520 13:28:17.105583  624195 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:28:17.105931  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.108932  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.109437  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.109468  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.109666  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.111911  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.112297  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.112327  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.112453  624195 provision.go:143] copyHostCerts
	I0520 13:28:17.112483  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:28:17.112519  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:28:17.112527  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:28:17.112590  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:28:17.112665  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:28:17.112682  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:28:17.112689  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:28:17.112710  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:28:17.112754  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:28:17.112771  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:28:17.112779  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:28:17.112799  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
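copyHostCerts above removes any stale ca.pem/cert.pem/key.pem in the .minikube root and copies fresh ones in. A small Go sketch of that remove-then-copy step, with the directories passed in as parameters rather than hard-coded:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// copyHostCerts copies each named cert from certsDir into destDir,
// replacing any existing copy, much like the exec_runner lines above.
func copyHostCerts(certsDir, destDir string, names []string) error {
	for _, name := range names {
		data, err := os.ReadFile(filepath.Join(certsDir, name))
		if err != nil {
			return fmt.Errorf("read %s: %w", name, err)
		}
		dst := filepath.Join(destDir, name)
		os.Remove(dst) // mirror the "found ..., removing" step; ignore if absent
		if err := os.WriteFile(dst, data, 0600); err != nil {
			return fmt.Errorf("write %s: %w", dst, err)
		}
	}
	return nil
}

func main() {
	err := copyHostCerts(".minikube/certs", ".minikube", []string{"ca.pem", "cert.pem", "key.pem"})
	fmt.Println(err)
}
```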
	I0520 13:28:17.112844  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194 san=[127.0.0.1 192.168.39.92 ha-170194 localhost minikube]
	I0520 13:28:17.183043  624195 provision.go:177] copyRemoteCerts
	I0520 13:28:17.183101  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:28:17.183127  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.185798  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.186268  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.186301  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.186430  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.186625  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.186765  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.186891  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.263716  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:28:17.263792  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:28:17.286709  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:28:17.286771  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 13:28:17.310154  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:28:17.310216  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:28:17.333534  624195 provision.go:87] duration metric: took 227.950346ms to configureAuth
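The server certificate generated at provision.go:117 carries the SANs listed above (127.0.0.1, 192.168.39.92, ha-170194, localhost, minikube). A self-signed Go sketch of issuing a cert with those SANs; the real flow signs with the minikube CA and uses its own helpers, so the template fields here are illustrative only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key size, validity and subject are illustrative assumptions.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-170194"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
		DNSNames:    []string{"ha-170194", "localhost", "minikube"},
	}
	// Self-signed for brevity; the real server.pem is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```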
	I0520 13:28:17.333565  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:28:17.333791  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:17.333904  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.336564  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.336892  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.336917  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.337113  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.337336  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.337505  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.337629  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.337762  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:17.337920  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:17.337933  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:28:17.582807  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:28:17.582837  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:28:17.582845  624195 main.go:141] libmachine: (ha-170194) Calling .GetURL
	I0520 13:28:17.584038  624195 main.go:141] libmachine: (ha-170194) DBG | Using libvirt version 6000000
	I0520 13:28:17.586091  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.586396  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.586423  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.586565  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:28:17.586579  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:28:17.586586  624195 client.go:171] duration metric: took 22.699301504s to LocalClient.Create
	I0520 13:28:17.586611  624195 start.go:167] duration metric: took 22.699379662s to libmachine.API.Create "ha-170194"
	I0520 13:28:17.586621  624195 start.go:293] postStartSetup for "ha-170194" (driver="kvm2")
	I0520 13:28:17.586642  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:28:17.586660  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.586894  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:28:17.586924  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.589115  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.589437  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.589477  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.589573  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.589745  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.589886  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.590044  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.667163  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:28:17.671217  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:28:17.671240  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:28:17.671299  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:28:17.671368  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:28:17.671378  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:28:17.671466  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:28:17.680210  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:28:17.701507  624195 start.go:296] duration metric: took 114.864585ms for postStartSetup
	I0520 13:28:17.701571  624195 main.go:141] libmachine: (ha-170194) Calling .GetConfigRaw
	I0520 13:28:17.702151  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.704863  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.705211  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.705239  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.705507  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:17.705727  624195 start.go:128] duration metric: took 22.837382587s to createHost
	I0520 13:28:17.705757  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.708076  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.708414  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.708442  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.708581  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.708782  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.708925  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.709049  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.709191  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:28:17.709391  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:28:17.709409  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:28:17.805513  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211697.764627381
	
	I0520 13:28:17.805540  624195 fix.go:216] guest clock: 1716211697.764627381
	I0520 13:28:17.805550  624195 fix.go:229] Guest: 2024-05-20 13:28:17.764627381 +0000 UTC Remote: 2024-05-20 13:28:17.705742423 +0000 UTC m=+22.952607324 (delta=58.884958ms)
	I0520 13:28:17.805576  624195 fix.go:200] guest clock delta is within tolerance: 58.884958ms
	I0520 13:28:17.805587  624195 start.go:83] releasing machines lock for "ha-170194", held for 22.937322256s
	I0520 13:28:17.805614  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.805884  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:17.808403  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.808724  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.808754  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.808867  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809445  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809654  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:17.809756  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:28:17.809792  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.809916  624195 ssh_runner.go:195] Run: cat /version.json
	I0520 13:28:17.809941  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:17.812301  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812371  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812658  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.812688  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812712  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:17.812726  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:17.812799  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.812933  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:17.813020  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.813052  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:17.813172  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.813265  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:17.813346  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:17.813430  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:28:17.885671  624195 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:28:17.885744  624195 ssh_runner.go:195] Run: systemctl --version
	I0520 13:28:17.920845  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:28:18.083087  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:28:18.089011  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:28:18.089074  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:28:18.104478  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:28:18.104502  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:28:18.104569  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:28:18.119192  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:28:18.131993  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:28:18.132040  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:28:18.144764  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:28:18.157011  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:28:18.262539  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:28:18.390638  624195 docker.go:233] disabling docker service ...
	I0520 13:28:18.390720  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:28:18.403852  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:28:18.416113  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:28:18.549600  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:28:18.661232  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
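The block above stops and masks cri-docker and docker before the node is switched to CRI-O. A Go sketch of that "check with is-active, then stop and mask" pattern via systemctl, with the unit name as a parameter and error handling simplified:

```go
package main

import (
	"fmt"
	"os/exec"
)

// disableIfActive masks a systemd unit if (and only if) it is currently
// active, echoing the is-active / stop / mask sequence in the log above.
func disableIfActive(unit string) error {
	// `systemctl is-active --quiet` exits 0 when the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run(); err != nil {
		return nil // not active (or check failed): nothing to do in this sketch
	}
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(disableIfActive("docker.service"))
}
```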
	I0520 13:28:18.674749  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:28:18.692146  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:28:18.692204  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.702249  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:28:18.702328  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.712386  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.722412  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.732343  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:28:18.742653  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.752679  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.768314  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:28:18.777984  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:28:18.786436  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:28:18.786490  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:28:18.798583  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:28:18.807592  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:28:18.916620  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
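The sed calls above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A pure-Go sketch of those two substitutions applied to the file's text; the real code shells the sed commands over SSH:

```go
package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides roughly mirrors the sed edits in the log: pin the pause
// image and force the cgroupfs cgroup manager in 02-crio.conf.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}
```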
	I0520 13:28:19.050058  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:28:19.050157  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:28:19.054483  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:28:19.054545  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:28:19.057926  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:28:19.099892  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:28:19.099978  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:28:19.125482  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:28:19.159649  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:28:19.161634  624195 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:28:19.164355  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:19.164819  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:19.164848  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:19.165120  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:28:19.169051  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:28:19.181358  624195 kubeadm.go:877] updating cluster {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:28:19.181503  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:28:19.181554  624195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:28:19.211681  624195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 13:28:19.211751  624195 ssh_runner.go:195] Run: which lz4
	I0520 13:28:19.215344  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 13:28:19.215446  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 13:28:19.219251  624195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 13:28:19.219283  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 13:28:20.426993  624195 crio.go:462] duration metric: took 1.211579486s to copy over tarball
	I0520 13:28:20.427099  624195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 13:28:22.481630  624195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.054498561s)
	I0520 13:28:22.481659  624195 crio.go:469] duration metric: took 2.054633756s to extract the tarball
	I0520 13:28:22.481674  624195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 13:28:22.517651  624195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:28:22.560937  624195 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:28:22.560962  624195 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:28:22.560970  624195 kubeadm.go:928] updating node { 192.168.39.92 8443 v1.30.1 crio true true} ...
	I0520 13:28:22.561099  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:28:22.561189  624195 ssh_runner.go:195] Run: crio config
	I0520 13:28:22.613106  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:28:22.613128  624195 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 13:28:22.613145  624195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:28:22.613167  624195 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170194 NodeName:ha-170194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:28:22.613321  624195 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-170194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:28:22.613346  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:28:22.613388  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:28:22.628339  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:28:22.628449  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
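Both the kubeadm config and the kube-vip manifest above are rendered from Go option structs into YAML. A toy text/template sketch of that options-to-YAML step; the struct, field names and template here are illustrative and far smaller than minikube's real templates:

```go
package main

import (
	"os"
	"text/template"
)

// initConfig holds a few of the options that feed the InitConfiguration
// section of the kubeadm config shown above. Field set is illustrative.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	cfg := initConfig{
		AdvertiseAddress: "192.168.39.92",
		BindPort:         8443,
		NodeName:         "ha-170194",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```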
	I0520 13:28:22.628504  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:28:22.637629  624195 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:28:22.637716  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 13:28:22.646391  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0520 13:28:22.661041  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:28:22.675870  624195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 13:28:22.690568  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 13:28:22.705009  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:28:22.708356  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:28:22.719020  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:28:22.844563  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
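Both /etc/hosts edits in this log (host.minikube.internal earlier, control-plane.minikube.internal just above) use the same grep -v / echo / cp pattern to stay idempotent. A pure-Go sketch of that edit against a scratch file, with path, IP and name as parameters rather than the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name and appends the
// desired "IP<TAB>name" mapping, mirroring the shell one-liner in the log.
// Blank lines are dropped too, which is fine for this sketch.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("hosts.test", "192.168.39.254", "control-plane.minikube.internal")
	fmt.Println(err)
}
```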
	I0520 13:28:22.860778  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.92
	I0520 13:28:22.860798  624195 certs.go:194] generating shared ca certs ...
	I0520 13:28:22.860815  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.860993  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:28:22.861032  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:28:22.861041  624195 certs.go:256] generating profile certs ...
	I0520 13:28:22.861099  624195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:28:22.861117  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt with IP's: []
	I0520 13:28:22.962878  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt ...
	I0520 13:28:22.962909  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt: {Name:mk48839fa6f1275bc62052afea07d44900deb930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.963083  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key ...
	I0520 13:28:22.963094  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key: {Name:mk204d14d925f8a71a8af7296551fc6ce490a267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:22.963169  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede
	I0520 13:28:22.963185  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.254]
	I0520 13:28:23.110370  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede ...
	I0520 13:28:23.110405  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede: {Name:mkd54c1e251ab37cbe185c1a0846b1344783525e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.110573  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede ...
	I0520 13:28:23.110587  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede: {Name:mk0c4673a459951ad3c1fb8b6a2bac8448ff4296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.110657  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.82000ede -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:28:23.110727  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.82000ede -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
	I0520 13:28:23.110777  624195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:28:23.110791  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt with IP's: []
	I0520 13:28:23.167318  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt ...
	I0520 13:28:23.167348  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt: {Name:mkefa0155ad99bbe313405324e43f6da286534a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.167497  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key ...
	I0520 13:28:23.167515  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key: {Name:mkcbec10ece7167813a11fb62a95789f2f93bd0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:23.167581  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:28:23.167597  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:28:23.167607  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:28:23.167621  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:28:23.167631  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:28:23.167641  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:28:23.167652  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:28:23.167662  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:28:23.167713  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:28:23.167745  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:28:23.167754  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:28:23.167773  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:28:23.167848  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:28:23.167881  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:28:23.167927  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:28:23.167954  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.167967  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.167979  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.168541  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:28:23.192031  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:28:23.213025  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:28:23.233925  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:28:23.254734  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:28:23.275509  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:28:23.296244  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:28:23.317011  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:28:23.338012  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:28:23.358915  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:28:23.379411  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:28:23.399572  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:28:23.414145  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:28:23.419405  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:28:23.428999  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.433116  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.433173  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:28:23.438564  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:28:23.448795  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:28:23.458868  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.462994  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.463056  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:28:23.468359  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:28:23.478069  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:28:23.487827  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.491839  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.491887  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:28:23.496898  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
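
Editor's note: the hex names in the symlink guards above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary. `openssl x509 -hash -noout` prints OpenSSL's subject-name hash, and a hashed CA directory such as /etc/ssl/certs is expected to contain `<hash>.N` links so TLS clients can find an issuer by hash. A minimal Go sketch of the same two steps, shelling out to openssl exactly like the commands in the log (illustration only, not minikube's certs.go):

// hash_link.go - mirrors the "openssl x509 -hash" + symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// the layout OpenSSL expects for a hashed CA directory.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, same as the "test -L || ln -fs" guard in the log
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
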
	I0520 13:28:23.506612  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:28:23.510142  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:28:23.510203  624195 kubeadm.go:391] StartCluster: {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:28:23.510285  624195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:28:23.510321  624195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:28:23.549729  624195 cri.go:89] found id: ""
	I0520 13:28:23.549807  624195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 13:28:23.559071  624195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 13:28:23.567994  624195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:28:23.576778  624195 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:28:23.576803  624195 kubeadm.go:156] found existing configuration files:
	
	I0520 13:28:23.576844  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:28:23.585105  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:28:23.585153  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:28:23.594059  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:28:23.602846  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:28:23.602916  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:28:23.612117  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:28:23.622726  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:28:23.622796  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:28:23.631890  624195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:28:23.641364  624195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:28:23.641420  624195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
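
Editor's note: the four grep/rm pairs above are a stale-kubeconfig sweep: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it. Here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op. A small Go sketch of that check, with a hypothetical helper name (not the minikube implementation):

// stale_conf.go - sketch of the grep-or-remove cleanup logged above (hypothetical helper).
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func cleanStaleKubeconfigs(dir, endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // still points at the expected control plane, keep it
		}
		// Missing or pointing elsewhere: delete so kubeadm writes a fresh one.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", path, err)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}
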
	I0520 13:28:23.649728  624195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 13:28:23.753410  624195 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 13:28:23.753487  624195 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 13:28:23.863616  624195 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 13:28:23.863738  624195 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 13:28:23.863835  624195 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 13:28:24.063090  624195 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 13:28:24.095838  624195 out.go:204]   - Generating certificates and keys ...
	I0520 13:28:24.095982  624195 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 13:28:24.096085  624195 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 13:28:24.348072  624195 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 13:28:24.447420  624195 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 13:28:24.658729  624195 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 13:28:24.905241  624195 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 13:28:25.030560  624195 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 13:28:25.030781  624195 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-170194 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0520 13:28:25.112572  624195 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 13:28:25.112787  624195 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-170194 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0520 13:28:25.315895  624195 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 13:28:25.634467  624195 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 13:28:26.078695  624195 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 13:28:26.078923  624195 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 13:28:26.243887  624195 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 13:28:26.352281  624195 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 13:28:26.614181  624195 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 13:28:26.838217  624195 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 13:28:26.926318  624195 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 13:28:26.926883  624195 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 13:28:26.929498  624195 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 13:28:26.989435  624195 out.go:204]   - Booting up control plane ...
	I0520 13:28:26.989613  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 13:28:26.989718  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 13:28:26.989808  624195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 13:28:26.989943  624195 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 13:28:26.990050  624195 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 13:28:26.990089  624195 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 13:28:27.081215  624195 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 13:28:27.081390  624195 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 13:28:27.582299  624195 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.301224ms
	I0520 13:28:27.582438  624195 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 13:28:33.554295  624195 kubeadm.go:309] [api-check] The API server is healthy after 5.971599406s
	I0520 13:28:33.575332  624195 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 13:28:33.597301  624195 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 13:28:33.632955  624195 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 13:28:33.633208  624195 kubeadm.go:309] [mark-control-plane] Marking the node ha-170194 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 13:28:33.646407  624195 kubeadm.go:309] [bootstrap-token] Using token: xxbnz9.veyzbo9bfh7fya27
	I0520 13:28:33.648795  624195 out.go:204]   - Configuring RBAC rules ...
	I0520 13:28:33.648922  624195 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 13:28:33.658191  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 13:28:33.674301  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 13:28:33.678034  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 13:28:33.681836  624195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 13:28:33.685596  624195 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 13:28:33.962504  624195 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 13:28:34.414749  624195 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 13:28:34.961323  624195 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 13:28:34.962213  624195 kubeadm.go:309] 
	I0520 13:28:34.962298  624195 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 13:28:34.962312  624195 kubeadm.go:309] 
	I0520 13:28:34.962389  624195 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 13:28:34.962404  624195 kubeadm.go:309] 
	I0520 13:28:34.962447  624195 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 13:28:34.962517  624195 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 13:28:34.962592  624195 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 13:28:34.962610  624195 kubeadm.go:309] 
	I0520 13:28:34.962665  624195 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 13:28:34.962671  624195 kubeadm.go:309] 
	I0520 13:28:34.962710  624195 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 13:28:34.962717  624195 kubeadm.go:309] 
	I0520 13:28:34.962769  624195 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 13:28:34.962844  624195 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 13:28:34.962906  624195 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 13:28:34.962912  624195 kubeadm.go:309] 
	I0520 13:28:34.962988  624195 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 13:28:34.963056  624195 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 13:28:34.963062  624195 kubeadm.go:309] 
	I0520 13:28:34.963130  624195 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xxbnz9.veyzbo9bfh7fya27 \
	I0520 13:28:34.963216  624195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 13:28:34.963258  624195 kubeadm.go:309] 	--control-plane 
	I0520 13:28:34.963287  624195 kubeadm.go:309] 
	I0520 13:28:34.963403  624195 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 13:28:34.963416  624195 kubeadm.go:309] 
	I0520 13:28:34.963534  624195 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xxbnz9.veyzbo9bfh7fya27 \
	I0520 13:28:34.963648  624195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 13:28:34.964372  624195 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
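
Editor's note: the --discovery-token-ca-cert-hash in both join commands above pins the cluster CA rather than authenticating the joiner. Per the kubeadm documentation it is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate, so any node holding ca.crt can recompute and verify it. A short Go sketch of that recomputation (the cert path is taken from the log above):

// ca_hash.go - recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the kubeadm join command
}
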
	I0520 13:28:34.964410  624195 cni.go:84] Creating CNI manager for ""
	I0520 13:28:34.964423  624195 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 13:28:34.966911  624195 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 13:28:34.969080  624195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 13:28:34.974261  624195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 13:28:34.974279  624195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 13:28:34.992012  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 13:28:35.316278  624195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 13:28:35.316385  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:35.316427  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194 minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=true
	I0520 13:28:35.368776  624195 ops.go:34] apiserver oom_adj: -16
	I0520 13:28:35.524586  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:36.025341  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:36.524735  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:37.025186  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:37.524757  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:38.024640  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:38.524933  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:39.025039  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:39.524713  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:40.024950  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:40.524909  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:41.024670  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:41.524965  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:42.025369  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:42.524728  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:43.025393  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:43.524666  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:44.025387  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:44.524749  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:45.025662  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:45.525640  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:46.025012  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:46.525060  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:47.025660  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 13:28:47.175131  624195 kubeadm.go:1107] duration metric: took 11.858816146s to wait for elevateKubeSystemPrivileges
	W0520 13:28:47.175195  624195 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 13:28:47.175209  624195 kubeadm.go:393] duration metric: took 23.665011428s to StartCluster
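
Editor's note: the burst of half-second-spaced `kubectl get sa default` calls above is a plain readiness poll: loop until the default service account exists, which means the controller-manager's service-account controller has populated the default namespace, then move on (the "elevateKubeSystemPrivileges" duration line summarizes it). A sketch of that polling pattern, shelling out to kubectl with the same paths as the log (hypothetical helper, not the actual minikube loop):

// wait_sa.go - sketch of the ~500ms "kubectl get sa default" readiness poll above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; bootstrap can continue
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.30.1/kubectl", "/var/lib/minikube/kubeconfig")
	fmt.Println("wait result:", err)
}
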
	I0520 13:28:47.175236  624195 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:47.175354  624195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:28:47.176264  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:28:47.176545  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 13:28:47.176556  624195 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:28:47.176581  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:28:47.176597  624195 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 13:28:47.176663  624195 addons.go:69] Setting storage-provisioner=true in profile "ha-170194"
	I0520 13:28:47.176683  624195 addons.go:69] Setting default-storageclass=true in profile "ha-170194"
	I0520 13:28:47.176742  624195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-170194"
	I0520 13:28:47.176802  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:47.176700  624195 addons.go:234] Setting addon storage-provisioner=true in "ha-170194"
	I0520 13:28:47.176858  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:28:47.177195  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.177227  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.177270  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.177310  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.193310  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0520 13:28:47.193313  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0520 13:28:47.193905  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.193914  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.194463  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.194485  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.194613  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.194638  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.194861  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.195042  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.195083  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.195654  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.195686  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.197520  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:28:47.197909  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 13:28:47.198505  624195 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 13:28:47.198812  624195 addons.go:234] Setting addon default-storageclass=true in "ha-170194"
	I0520 13:28:47.198865  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:28:47.199295  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.199352  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.211688  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0520 13:28:47.212323  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.212951  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.212987  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.213361  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.213578  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.214825  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0520 13:28:47.215350  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.215904  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.215921  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.215971  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:47.219043  624195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:28:47.216311  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.221534  624195 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:28:47.219767  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:47.221577  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:47.221601  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 13:28:47.221626  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:47.224993  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.225431  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:47.225469  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.225744  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:47.226002  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:47.226162  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:47.226304  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:28:47.238287  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0520 13:28:47.238818  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:47.239375  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:47.239399  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:47.239826  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:47.240058  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:28:47.241890  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:28:47.242136  624195 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 13:28:47.242151  624195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 13:28:47.242165  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:28:47.245241  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.245728  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:28:47.245755  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:28:47.245953  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:28:47.246159  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:28:47.246321  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:28:47.246460  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
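
Editor's note: each addon install above first opens an SSH session to the node (user docker, port 22, the machine's generated id_rsa) and then copies the manifest before applying it. A minimal sketch of that kind of client using golang.org/x/crypto/ssh; this is not minikube's sshutil package, and host-key verification is skipped only because the target is a throwaway local test VM:

// ssh_client.go - sketch of opening the SSH session described by the sshutil log lines.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func dial(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM, not for production
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dial("192.168.39.92:22", "docker",
		"/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
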
	I0520 13:28:47.376266  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 13:28:47.391766  624195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 13:28:47.446364  624195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:28:48.072427  624195 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 13:28:48.072521  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.072546  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.072873  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.072890  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.072900  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.072907  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.073149  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.073163  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.073164  624195 main.go:141] libmachine: (ha-170194) DBG | Closing plugin on server side
	I0520 13:28:48.073339  624195 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 13:28:48.073353  624195 round_trippers.go:469] Request Headers:
	I0520 13:28:48.073364  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:28:48.073368  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:28:48.084371  624195 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 13:28:48.084940  624195 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 13:28:48.084972  624195 round_trippers.go:469] Request Headers:
	I0520 13:28:48.084981  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:28:48.084985  624195 round_trippers.go:473]     Content-Type: application/json
	I0520 13:28:48.084989  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:28:48.087741  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:28:48.087930  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.087949  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.088254  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.088275  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.088280  624195 main.go:141] libmachine: (ha-170194) DBG | Closing plugin on server side
	I0520 13:28:48.224085  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.224124  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.224442  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.224462  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.224471  624195 main.go:141] libmachine: Making call to close driver server
	I0520 13:28:48.224478  624195 main.go:141] libmachine: (ha-170194) Calling .Close
	I0520 13:28:48.224753  624195 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:28:48.224768  624195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:28:48.227624  624195 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0520 13:28:48.230004  624195 addons.go:505] duration metric: took 1.053400697s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0520 13:28:48.230056  624195 start.go:245] waiting for cluster config update ...
	I0520 13:28:48.230074  624195 start.go:254] writing updated cluster config ...
	I0520 13:28:48.232562  624195 out.go:177] 
	I0520 13:28:48.234985  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:28:48.235103  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:48.238491  624195 out.go:177] * Starting "ha-170194-m02" control-plane node in "ha-170194" cluster
	I0520 13:28:48.241389  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:28:48.241422  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:28:48.241527  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:28:48.241538  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:28:48.241611  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:28:48.241786  624195 start.go:360] acquireMachinesLock for ha-170194-m02: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:28:48.241834  624195 start.go:364] duration metric: took 27.71µs to acquireMachinesLock for "ha-170194-m02"
	I0520 13:28:48.241853  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:28:48.241937  624195 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 13:28:48.245208  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:28:48.245349  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:28:48.245386  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:28:48.260813  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0520 13:28:48.261287  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:28:48.261782  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:28:48.261811  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:28:48.262150  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:28:48.262362  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:28:48.262523  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:28:48.262656  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:28:48.262678  624195 client.go:168] LocalClient.Create starting
	I0520 13:28:48.262709  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:28:48.262742  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:28:48.262756  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:28:48.262815  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:28:48.262832  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:28:48.262844  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:28:48.262863  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:28:48.262872  624195 main.go:141] libmachine: (ha-170194-m02) Calling .PreCreateCheck
	I0520 13:28:48.263019  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:28:48.263432  624195 main.go:141] libmachine: Creating machine...
	I0520 13:28:48.263445  624195 main.go:141] libmachine: (ha-170194-m02) Calling .Create
	I0520 13:28:48.263581  624195 main.go:141] libmachine: (ha-170194-m02) Creating KVM machine...
	I0520 13:28:48.264696  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found existing default KVM network
	I0520 13:28:48.264847  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found existing private KVM network mk-ha-170194
	I0520 13:28:48.264987  624195 main.go:141] libmachine: (ha-170194-m02) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 ...
	I0520 13:28:48.265015  624195 main.go:141] libmachine: (ha-170194-m02) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:28:48.265084  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.264932  624571 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:28:48.265150  624195 main.go:141] libmachine: (ha-170194-m02) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:28:48.520565  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.520428  624571 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa...
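
Editor's note: before building the disk image, the driver generates a fresh RSA key pair for the new machine (the id_rsa path in the line above). A self-contained sketch of that step using the standard library plus golang.org/x/crypto/ssh for the authorized_keys-format public half (illustrative, not the libmachine code):

// gen_key.go - sketch of generating the machine's id_rsa / id_rsa.pub pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func writeKeyPair(path string) error {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile(path, privPEM, 0600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
}

func main() {
	if err := writeKeyPair("id_rsa"); err != nil {
		panic(err)
	}
}
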
	I0520 13:28:48.688844  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.688694  624571 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/ha-170194-m02.rawdisk...
	I0520 13:28:48.688877  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Writing magic tar header
	I0520 13:28:48.688886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Writing SSH key tar header
	I0520 13:28:48.688894  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:48.688814  624571 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 ...
	I0520 13:28:48.688910  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02
	I0520 13:28:48.689001  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:28:48.689025  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:28:48.689039  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02 (perms=drwx------)
	I0520 13:28:48.689059  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:28:48.689074  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:28:48.689091  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:28:48.689105  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:28:48.689121  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:28:48.689135  624195 main.go:141] libmachine: (ha-170194-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:28:48.689149  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:28:48.689164  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:28:48.689184  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Checking permissions on dir: /home
	I0520 13:28:48.689204  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Skipping /home - not owner
	I0520 13:28:48.689222  624195 main.go:141] libmachine: (ha-170194-m02) Creating domain...
	I0520 13:28:48.690305  624195 main.go:141] libmachine: (ha-170194-m02) define libvirt domain using xml: 
	I0520 13:28:48.690327  624195 main.go:141] libmachine: (ha-170194-m02) <domain type='kvm'>
	I0520 13:28:48.690339  624195 main.go:141] libmachine: (ha-170194-m02)   <name>ha-170194-m02</name>
	I0520 13:28:48.690350  624195 main.go:141] libmachine: (ha-170194-m02)   <memory unit='MiB'>2200</memory>
	I0520 13:28:48.690356  624195 main.go:141] libmachine: (ha-170194-m02)   <vcpu>2</vcpu>
	I0520 13:28:48.690362  624195 main.go:141] libmachine: (ha-170194-m02)   <features>
	I0520 13:28:48.690370  624195 main.go:141] libmachine: (ha-170194-m02)     <acpi/>
	I0520 13:28:48.690376  624195 main.go:141] libmachine: (ha-170194-m02)     <apic/>
	I0520 13:28:48.690384  624195 main.go:141] libmachine: (ha-170194-m02)     <pae/>
	I0520 13:28:48.690391  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690399  624195 main.go:141] libmachine: (ha-170194-m02)   </features>
	I0520 13:28:48.690406  624195 main.go:141] libmachine: (ha-170194-m02)   <cpu mode='host-passthrough'>
	I0520 13:28:48.690415  624195 main.go:141] libmachine: (ha-170194-m02)   
	I0520 13:28:48.690419  624195 main.go:141] libmachine: (ha-170194-m02)   </cpu>
	I0520 13:28:48.690425  624195 main.go:141] libmachine: (ha-170194-m02)   <os>
	I0520 13:28:48.690429  624195 main.go:141] libmachine: (ha-170194-m02)     <type>hvm</type>
	I0520 13:28:48.690435  624195 main.go:141] libmachine: (ha-170194-m02)     <boot dev='cdrom'/>
	I0520 13:28:48.690441  624195 main.go:141] libmachine: (ha-170194-m02)     <boot dev='hd'/>
	I0520 13:28:48.690470  624195 main.go:141] libmachine: (ha-170194-m02)     <bootmenu enable='no'/>
	I0520 13:28:48.690490  624195 main.go:141] libmachine: (ha-170194-m02)   </os>
	I0520 13:28:48.690497  624195 main.go:141] libmachine: (ha-170194-m02)   <devices>
	I0520 13:28:48.690507  624195 main.go:141] libmachine: (ha-170194-m02)     <disk type='file' device='cdrom'>
	I0520 13:28:48.690546  624195 main.go:141] libmachine: (ha-170194-m02)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/boot2docker.iso'/>
	I0520 13:28:48.690575  624195 main.go:141] libmachine: (ha-170194-m02)       <target dev='hdc' bus='scsi'/>
	I0520 13:28:48.690589  624195 main.go:141] libmachine: (ha-170194-m02)       <readonly/>
	I0520 13:28:48.690601  624195 main.go:141] libmachine: (ha-170194-m02)     </disk>
	I0520 13:28:48.690613  624195 main.go:141] libmachine: (ha-170194-m02)     <disk type='file' device='disk'>
	I0520 13:28:48.690626  624195 main.go:141] libmachine: (ha-170194-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:28:48.690642  624195 main.go:141] libmachine: (ha-170194-m02)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/ha-170194-m02.rawdisk'/>
	I0520 13:28:48.690658  624195 main.go:141] libmachine: (ha-170194-m02)       <target dev='hda' bus='virtio'/>
	I0520 13:28:48.690669  624195 main.go:141] libmachine: (ha-170194-m02)     </disk>
	I0520 13:28:48.690684  624195 main.go:141] libmachine: (ha-170194-m02)     <interface type='network'>
	I0520 13:28:48.690697  624195 main.go:141] libmachine: (ha-170194-m02)       <source network='mk-ha-170194'/>
	I0520 13:28:48.690708  624195 main.go:141] libmachine: (ha-170194-m02)       <model type='virtio'/>
	I0520 13:28:48.690717  624195 main.go:141] libmachine: (ha-170194-m02)     </interface>
	I0520 13:28:48.690732  624195 main.go:141] libmachine: (ha-170194-m02)     <interface type='network'>
	I0520 13:28:48.690747  624195 main.go:141] libmachine: (ha-170194-m02)       <source network='default'/>
	I0520 13:28:48.690756  624195 main.go:141] libmachine: (ha-170194-m02)       <model type='virtio'/>
	I0520 13:28:48.690780  624195 main.go:141] libmachine: (ha-170194-m02)     </interface>
	I0520 13:28:48.690807  624195 main.go:141] libmachine: (ha-170194-m02)     <serial type='pty'>
	I0520 13:28:48.690829  624195 main.go:141] libmachine: (ha-170194-m02)       <target port='0'/>
	I0520 13:28:48.690846  624195 main.go:141] libmachine: (ha-170194-m02)     </serial>
	I0520 13:28:48.690864  624195 main.go:141] libmachine: (ha-170194-m02)     <console type='pty'>
	I0520 13:28:48.690883  624195 main.go:141] libmachine: (ha-170194-m02)       <target type='serial' port='0'/>
	I0520 13:28:48.690895  624195 main.go:141] libmachine: (ha-170194-m02)     </console>
	I0520 13:28:48.690903  624195 main.go:141] libmachine: (ha-170194-m02)     <rng model='virtio'>
	I0520 13:28:48.690913  624195 main.go:141] libmachine: (ha-170194-m02)       <backend model='random'>/dev/random</backend>
	I0520 13:28:48.690920  624195 main.go:141] libmachine: (ha-170194-m02)     </rng>
	I0520 13:28:48.690927  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690933  624195 main.go:141] libmachine: (ha-170194-m02)     
	I0520 13:28:48.690941  624195 main.go:141] libmachine: (ha-170194-m02)   </devices>
	I0520 13:28:48.690948  624195 main.go:141] libmachine: (ha-170194-m02) </domain>
	I0520 13:28:48.690964  624195 main.go:141] libmachine: (ha-170194-m02) 
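
The block above is the complete libvirt domain XML the kvm2 driver defines for this second control-plane VM: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a bootable CD-ROM, a raw virtio disk, and two virtio NICs on the default and mk-ha-170194 networks. As a rough stand-alone illustration only (not the driver's actual code path, which talks to libvirt directly), XML like this can also be defined and booted through the virsh CLI from a small Go program; the file name domain.xml below is a placeholder.

    // Illustrative only: define and boot a libvirt domain from a pre-rendered
    // XML file by shelling out to virsh. The kvm2 driver itself uses the
    // libvirt API rather than the CLI.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func defineAndStart(xmlPath, domainName string) error {
        // virsh define registers the domain from the XML; virsh start boots it.
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // domain.xml is a placeholder for XML like the dump logged above.
        if err := defineAndStart("domain.xml", "ha-170194-m02"); err != nil {
            log.Fatal(err)
        }
    }
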
	I0520 13:28:48.698862  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:30:22:2e in network default
	I0520 13:28:48.701066  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:48.701087  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring networks are active...
	I0520 13:28:48.702076  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring network default is active
	I0520 13:28:48.702469  624195 main.go:141] libmachine: (ha-170194-m02) Ensuring network mk-ha-170194 is active
	I0520 13:28:48.702848  624195 main.go:141] libmachine: (ha-170194-m02) Getting domain xml...
	I0520 13:28:48.703648  624195 main.go:141] libmachine: (ha-170194-m02) Creating domain...
	I0520 13:28:49.949646  624195 main.go:141] libmachine: (ha-170194-m02) Waiting to get IP...
	I0520 13:28:49.950506  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:49.950886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:49.950952  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:49.950868  624571 retry.go:31] will retry after 260.432301ms: waiting for machine to come up
	I0520 13:28:50.213512  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:50.214024  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:50.214061  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:50.213958  624571 retry.go:31] will retry after 316.191611ms: waiting for machine to come up
	I0520 13:28:50.531590  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:50.532047  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:50.532079  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:50.531993  624571 retry.go:31] will retry after 469.182705ms: waiting for machine to come up
	I0520 13:28:51.002473  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:51.002920  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:51.002953  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:51.002870  624571 retry.go:31] will retry after 532.236669ms: waiting for machine to come up
	I0520 13:28:51.537274  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:51.537911  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:51.537940  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:51.537866  624571 retry.go:31] will retry after 469.464444ms: waiting for machine to come up
	I0520 13:28:52.008531  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:52.008968  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:52.008999  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:52.008925  624571 retry.go:31] will retry after 658.375912ms: waiting for machine to come up
	I0520 13:28:52.668762  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:52.669226  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:52.669269  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:52.669170  624571 retry.go:31] will retry after 1.046807109s: waiting for machine to come up
	I0520 13:28:53.718231  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:53.718626  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:53.718660  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:53.718578  624571 retry.go:31] will retry after 1.300389906s: waiting for machine to come up
	I0520 13:28:55.021098  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:55.021668  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:55.021697  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:55.021614  624571 retry.go:31] will retry after 1.666445023s: waiting for machine to come up
	I0520 13:28:56.690683  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:56.691224  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:56.691248  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:56.691175  624571 retry.go:31] will retry after 1.6710471s: waiting for machine to come up
	I0520 13:28:58.364546  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:28:58.365756  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:28:58.365794  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:28:58.365607  624571 retry.go:31] will retry after 1.861117457s: waiting for machine to come up
	I0520 13:29:00.229815  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:00.230274  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:00.230302  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:00.230229  624571 retry.go:31] will retry after 2.215945961s: waiting for machine to come up
	I0520 13:29:02.448575  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:02.448999  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:02.449028  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:02.448936  624571 retry.go:31] will retry after 3.796039161s: waiting for machine to come up
	I0520 13:29:06.247888  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:06.248421  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find current IP address of domain ha-170194-m02 in network mk-ha-170194
	I0520 13:29:06.248454  624195 main.go:141] libmachine: (ha-170194-m02) DBG | I0520 13:29:06.248359  624571 retry.go:31] will retry after 3.504798848s: waiting for machine to come up
	I0520 13:29:09.755718  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.756305  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has current primary IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.756326  624195 main.go:141] libmachine: (ha-170194-m02) Found IP for machine: 192.168.39.155
	I0520 13:29:09.756337  624195 main.go:141] libmachine: (ha-170194-m02) Reserving static IP address...
	I0520 13:29:09.756702  624195 main.go:141] libmachine: (ha-170194-m02) DBG | unable to find host DHCP lease matching {name: "ha-170194-m02", mac: "52:54:00:3b:bd:91", ip: "192.168.39.155"} in network mk-ha-170194
	I0520 13:29:09.837735  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Getting to WaitForSSH function...
	I0520 13:29:09.837769  624195 main.go:141] libmachine: (ha-170194-m02) Reserved static IP address: 192.168.39.155
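
The run of "will retry after ..." lines that ends above is the driver polling DHCP until the new domain picks up an address, sleeping a little longer between attempts each time. Below is a minimal sketch of that wait-with-backoff pattern; the delays, timeout, backoff factor, and fake lookup are invented for the example and are not the values minikube uses.

    // Sketch of the poll-with-growing-delay pattern behind the
    // "will retry after ..." lines; values here are invented.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP calls lookup until it yields an address or the deadline passes,
    // growing the pause between attempts each round.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // stretch the interval before the next poll
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            // Fake lookup: pretend the DHCP lease shows up on the fourth poll.
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.0.2.10", nil // documentation address, not from the log
        }, 10*time.Second)
        fmt.Println(ip, err)
    }
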
	I0520 13:29:09.837790  624195 main.go:141] libmachine: (ha-170194-m02) Waiting for SSH to be available...
	I0520 13:29:09.840897  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.841394  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:09.841425  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.841636  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using SSH client type: external
	I0520 13:29:09.841662  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa (-rw-------)
	I0520 13:29:09.841696  624195 main.go:141] libmachine: (ha-170194-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:29:09.841709  624195 main.go:141] libmachine: (ha-170194-m02) DBG | About to run SSH command:
	I0520 13:29:09.841721  624195 main.go:141] libmachine: (ha-170194-m02) DBG | exit 0
	I0520 13:29:09.965601  624195 main.go:141] libmachine: (ha-170194-m02) DBG | SSH cmd err, output: <nil>: 
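
The WaitForSSH step logged above just keeps running exit 0 on the guest through the external ssh client until the command returns status 0. Below is a self-contained approximation in Go using a subset of the ssh options that appear in the log; the IP address and key path in main are placeholders.

    // Approximation of the WaitForSSH probe: run "exit 0" on the guest and
    // treat a zero exit status as "sshd is ready".
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        // IP and key path are placeholders.
        for !sshReady("192.168.39.155", "/path/to/id_rsa") {
            fmt.Println("ssh not ready yet, retrying...")
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
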
	I0520 13:29:09.965913  624195 main.go:141] libmachine: (ha-170194-m02) KVM machine creation complete!
	I0520 13:29:09.966212  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:29:09.966833  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:09.967078  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:09.967296  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:29:09.967314  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:29:09.968735  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:29:09.968754  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:29:09.968761  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:29:09.968769  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:09.971729  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.972179  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:09.972217  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:09.972452  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:09.972642  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:09.972850  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:09.973010  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:09.973225  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:09.973538  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:09.973556  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:29:10.072679  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:29:10.072712  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:29:10.072724  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.075775  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.076221  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.076250  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.076477  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.076738  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.076901  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.077051  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.077207  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.077401  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.077413  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:29:10.177956  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:29:10.178073  624195 main.go:141] libmachine: found compatible host: buildroot
	I0520 13:29:10.178083  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:29:10.178091  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.178395  624195 buildroot.go:166] provisioning hostname "ha-170194-m02"
	I0520 13:29:10.178433  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.178702  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.181773  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.182140  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.182174  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.182345  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.182574  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.182736  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.182904  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.183077  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.183262  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.183288  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194-m02 && echo "ha-170194-m02" | sudo tee /etc/hostname
	I0520 13:29:10.296106  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194-m02
	
	I0520 13:29:10.296139  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.299063  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.299448  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.299472  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.299639  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.299875  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.300053  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.300212  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.300350  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.300553  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.300577  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:29:10.405448  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:29:10.405492  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:29:10.405509  624195 buildroot.go:174] setting up certificates
	I0520 13:29:10.405519  624195 provision.go:84] configureAuth start
	I0520 13:29:10.405529  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetMachineName
	I0520 13:29:10.405831  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:10.408379  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.408720  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.408747  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.408876  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.411430  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.411759  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.411790  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.411901  624195 provision.go:143] copyHostCerts
	I0520 13:29:10.411938  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:29:10.411974  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:29:10.411984  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:29:10.412057  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:29:10.412171  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:29:10.412197  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:29:10.412206  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:29:10.412247  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:29:10.412313  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:29:10.412336  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:29:10.412342  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:29:10.412376  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:29:10.412442  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194-m02 san=[127.0.0.1 192.168.39.155 ha-170194-m02 localhost minikube]
	I0520 13:29:10.629236  624195 provision.go:177] copyRemoteCerts
	I0520 13:29:10.629318  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:29:10.629350  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.631891  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.632207  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.632244  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.632401  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.632626  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.632795  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.632921  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:10.711236  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:29:10.711305  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:29:10.738073  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:29:10.738147  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 13:29:10.763816  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:29:10.763902  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:29:10.787048  624195 provision.go:87] duration metric: took 381.511669ms to configureAuth
	I0520 13:29:10.787090  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:29:10.787327  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:10.787453  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:10.790246  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.790624  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:10.790656  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:10.790829  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:10.791053  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.791201  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:10.791319  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:10.791479  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:10.791733  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:10.791759  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:29:11.046719  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:29:11.046758  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:29:11.046771  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetURL
	I0520 13:29:11.048372  624195 main.go:141] libmachine: (ha-170194-m02) DBG | Using libvirt version 6000000
	I0520 13:29:11.051077  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.051434  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.051469  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.051704  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:29:11.051721  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:29:11.051728  624195 client.go:171] duration metric: took 22.789040995s to LocalClient.Create
	I0520 13:29:11.051755  624195 start.go:167] duration metric: took 22.789100264s to libmachine.API.Create "ha-170194"
	I0520 13:29:11.051764  624195 start.go:293] postStartSetup for "ha-170194-m02" (driver="kvm2")
	I0520 13:29:11.051774  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:29:11.051791  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.052036  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:29:11.052069  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.054471  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.054862  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.054887  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.055044  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.055243  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.055422  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.055595  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:11.136114  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:29:11.140174  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:29:11.140213  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:29:11.140301  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:29:11.140371  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:29:11.140383  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:29:11.140461  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:29:11.149831  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:29:11.171949  624195 start.go:296] duration metric: took 120.169054ms for postStartSetup
	I0520 13:29:11.172005  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetConfigRaw
	I0520 13:29:11.172773  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:11.175414  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.175819  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.175852  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.176071  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:29:11.176325  624195 start.go:128] duration metric: took 22.934372346s to createHost
	I0520 13:29:11.176357  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.178710  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.179119  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.179154  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.179317  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.179558  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.179729  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.179903  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.180098  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:11.180265  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I0520 13:29:11.180275  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:29:11.277928  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211751.253075568
	
	I0520 13:29:11.277952  624195 fix.go:216] guest clock: 1716211751.253075568
	I0520 13:29:11.277960  624195 fix.go:229] Guest: 2024-05-20 13:29:11.253075568 +0000 UTC Remote: 2024-05-20 13:29:11.176341982 +0000 UTC m=+76.423206883 (delta=76.733586ms)
	I0520 13:29:11.277976  624195 fix.go:200] guest clock delta is within tolerance: 76.733586ms
	I0520 13:29:11.277980  624195 start.go:83] releasing machines lock for "ha-170194-m02", held for 23.036137695s
	I0520 13:29:11.278004  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.278289  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:11.280962  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.281421  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.281445  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.284797  624195 out.go:177] * Found network options:
	I0520 13:29:11.287145  624195 out.go:177]   - NO_PROXY=192.168.39.92
	W0520 13:29:11.289241  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:29:11.289291  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.289930  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.290129  624195 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:29:11.290238  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:29:11.290292  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	W0520 13:29:11.290364  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:29:11.290445  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:29:11.290464  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:29:11.293198  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293364  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293607  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.293636  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293736  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.293755  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:11.293781  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:11.293920  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:29:11.293931  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.294120  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.294154  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:29:11.294313  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:29:11.294304  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:11.294467  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:29:11.528028  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:29:11.534156  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:29:11.534243  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:29:11.550155  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:29:11.550182  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:29:11.550269  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:29:11.566853  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:29:11.579779  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:29:11.579853  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:29:11.593518  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:29:11.607644  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:29:11.729618  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:29:11.883567  624195 docker.go:233] disabling docker service ...
	I0520 13:29:11.883664  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:29:11.897860  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:29:11.911395  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:29:12.036291  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:29:12.155265  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:29:12.169239  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:29:12.187705  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:29:12.187768  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.197624  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:29:12.197739  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.207577  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.217206  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.227532  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:29:12.237505  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.247577  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.264555  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:12.275960  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:29:12.285127  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:29:12.285192  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:29:12.299122  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:29:12.309316  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:12.438337  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
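
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. The sketch below performs the same kind of in-place rewrite from Go for just the first two settings; it is an approximation for illustration, not what minikube itself runs, since minikube sends the sed commands over SSH as shown.

    // Approximation of the sed-based edits: pin the pause image and the
    // cgroup manager in the crio drop-in; the caller would then restart crio.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            log.Fatal(err)
        }
    }
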
	I0520 13:29:12.601443  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:29:12.601522  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:29:12.606203  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:29:12.606294  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:29:12.609877  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:29:12.646713  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:29:12.646819  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:29:12.672438  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:29:12.700788  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:29:12.703097  624195 out.go:177]   - env NO_PROXY=192.168.39.92
	I0520 13:29:12.705052  624195 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:29:12.707544  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:12.707858  624195 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:02 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:29:12.707886  624195 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:29:12.708132  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:29:12.712686  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:29:12.724807  624195 mustload.go:65] Loading cluster: ha-170194
	I0520 13:29:12.725080  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:12.725514  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:12.725551  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:12.740541  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0520 13:29:12.741019  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:12.741564  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:12.741586  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:12.741966  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:12.742203  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:29:12.743919  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:29:12.744205  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:12.744245  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:12.759438  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0520 13:29:12.759829  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:12.760261  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:12.760286  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:12.760610  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:12.760795  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:29:12.760964  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.155
	I0520 13:29:12.760976  624195 certs.go:194] generating shared ca certs ...
	I0520 13:29:12.760988  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:12.761132  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:29:12.761173  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:29:12.761183  624195 certs.go:256] generating profile certs ...
	I0520 13:29:12.761288  624195 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:29:12.761319  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8
	I0520 13:29:12.761335  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.254]
	I0520 13:29:13.038501  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 ...
	I0520 13:29:13.038539  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8: {Name:mkdf5eaf058ef04410571d3595f24432d4e719c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:13.038742  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8 ...
	I0520 13:29:13.038764  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8: {Name:mkc7f28c6cc13ab984446cb2344b9f6ccaeae860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:13.038864  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.109262c8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:29:13.039011  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.109262c8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
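
The apiserver certificate generated above is issued with IP SANs covering the in-cluster service addresses, localhost, both node IPs, and 192.168.39.254. Below is a small sketch, assuming a PEM-encoded certificate file on disk, that lists the IP SANs of such a certificate so those addresses can be checked; the path apiserver.crt is a placeholder.

    // List the IP SANs of a PEM-encoded certificate such as apiserver.crt.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func printIPSANs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        for _, ip := range cert.IPAddresses {
            fmt.Println(ip)
        }
        return nil
    }

    func main() {
        // apiserver.crt is a placeholder path.
        if err := printIPSANs("apiserver.crt"); err != nil {
            log.Fatal(err)
        }
    }
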
	I0520 13:29:13.039154  624195 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:29:13.039171  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:29:13.039185  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:29:13.039199  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:29:13.039214  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:29:13.039226  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:29:13.039240  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:29:13.039253  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:29:13.039266  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:29:13.039314  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:29:13.039342  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:29:13.039352  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:29:13.039374  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:29:13.039396  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:29:13.039417  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:29:13.039452  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:29:13.039477  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.039491  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.039503  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.039539  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:29:13.042885  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:13.043259  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:29:13.043295  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:13.043497  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:29:13.043747  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:29:13.043928  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:29:13.044066  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:29:13.113643  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 13:29:13.118570  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 13:29:13.129028  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 13:29:13.133684  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 13:29:13.145361  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 13:29:13.149303  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 13:29:13.159094  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 13:29:13.162868  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 13:29:13.174323  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 13:29:13.178516  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 13:29:13.190302  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 13:29:13.194275  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 13:29:13.204637  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:29:13.229934  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:29:13.252924  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:29:13.276381  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:29:13.298860  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 13:29:13.321647  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:29:13.344077  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:29:13.366579  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:29:13.388680  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:29:13.411437  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:29:13.434477  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:29:13.457435  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 13:29:13.473705  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 13:29:13.489456  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 13:29:13.505008  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 13:29:13.520979  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 13:29:13.537111  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 13:29:13.553088  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
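
The stat / scp pairs above follow a simple sync rule: check whether each shared credential (sa.pub, sa.key, the front-proxy and etcd CAs) is present, read it into memory when it is, and then write the in-memory copies back out under /var/lib/minikube/certs. A rough sketch of the existence-check half using golang.org/x/crypto/ssh; the host, user and key path are taken from the log, but the function itself is an illustration, not minikube's sshutil/ssh_runner code:

	package sshcheck

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// remoteFileSize runs `stat -c %s` on the target host and returns the file
	// size, or an error if the file is missing (mirroring the stat/scp pattern
	// in the log above).
	func remoteFileSize(path string) (string, error) {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa")
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.92:22", cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(fmt.Sprintf("stat -c %%s %s", path))
		if err != nil {
			return "", fmt.Errorf("%s missing or stat failed: %w", path, err)
		}
		return string(out), nil
	}
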
	I0520 13:29:13.568524  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:29:13.573813  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:29:13.583657  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.587625  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.587682  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:13.593610  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:29:13.604353  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:29:13.614642  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.619003  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.619076  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:29:13.624541  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:29:13.635084  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:29:13.645804  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.650072  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.650128  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:29:13.655544  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
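
Each CA-style certificate placed under /usr/share/ca-certificates is then symlinked into /etc/ssl/certs under the name <subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL locates trust anchors by hash lookup. A hedged sketch of the same step driven from Go with os/exec, reusing the commands from the log; the path argument is a placeholder:

	package trust

	import (
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of a PEM cert and symlinks it
	// into /etc/ssl/certs/<hash>.0 so TLS clients on the node can find it.
	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// ln -fs equivalent: drop any stale link, then point the new one at the cert.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}
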
	I0520 13:29:13.667113  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:29:13.670999  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:29:13.671057  624195 kubeadm.go:928] updating node {m02 192.168.39.155 8443 v1.30.1 crio true true} ...
	I0520 13:29:13.671171  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:29:13.671203  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:29:13.671235  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:29:13.689752  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:29:13.689823  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
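
The kube-vip manifest above is rendered with the cluster-specific pieces (the VIP 192.168.39.254, the API server port, the load-balancing switches) substituted in, and is written out later as the static pod /etc/kubernetes/manifests/kube-vip.yaml. A small text/template sketch of that kind of substitution, assuming only the address and port vary; the fragment and field names are illustrative, not minikube's actual template:

	package kubevip

	import (
		"os"
		"text/template"
	)

	// Params holds the values that differ per cluster in this sketch.
	type Params struct {
		Address string // the control-plane VIP, e.g. 192.168.39.254
		Port    string // the API server port, e.g. 8443
	}

	// A trimmed fragment of the env block; the full manifest in the log
	// carries many more entries.
	const fragment = `    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .Address }}
	    - name: lb_port
	      value: "{{ .Port }}"
	`

	// render writes the filled-in fragment to stdout.
	func render(p Params) error {
		t, err := template.New("kube-vip").Parse(fragment)
		if err != nil {
			return err
		}
		return t.Execute(os.Stdout, p)
	}
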
	I0520 13:29:13.689876  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:13.700970  624195 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 13:29:13.701043  624195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:13.712823  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 13:29:13.712860  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:29:13.712937  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:29:13.712953  624195 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 13:29:13.713018  624195 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 13:29:13.717407  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 13:29:13.717438  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 13:29:19.375978  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:29:19.376066  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:29:19.380820  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 13:29:19.380859  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 13:29:24.665771  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:29:24.680483  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:29:24.680601  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:29:24.685128  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 13:29:24.685175  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
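
The ?checksum=file:...sha256 query on the download URLs above means the published SHA-256 digest is fetched alongside each binary and the file is rejected if the two disagree. A self-contained sketch of that verification with net/http and crypto/sha256 (the URL scheme is copied from the log, but this is not minikube's download package):

	package dlsketch

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetchVerified downloads url to dst while hashing the stream, then checks
	// the result against the hex digest published at url+".sha256".
	func fetchVerified(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()

		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}

		sumResp, err := http.Get(url + ".sha256")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		want, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
		}
		return nil
	}

For example, fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl", "/tmp/kubectl") would mirror the kubectl download above.
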
	I0520 13:29:25.072572  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 13:29:25.082547  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 13:29:25.098548  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:29:25.114441  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 13:29:25.130086  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:29:25.133721  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:29:25.145122  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:25.261442  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:25.277748  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:29:25.278120  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:25.278189  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:25.293754  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0520 13:29:25.294332  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:25.294933  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:29:25.294960  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:25.295355  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:25.295603  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:29:25.295778  624195 start.go:316] joinCluster: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0520 13:29:25.295879  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 13:29:25.295898  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:29:25.299469  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:25.300086  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:29:25.300111  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:29:25.300359  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:29:25.300583  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:29:25.300783  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:29:25.300986  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:29:25.459639  624195 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:29:25.459720  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u17l93.2jyx28d5o2okpqwi --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m02 --control-plane --apiserver-advertise-address=192.168.39.155 --apiserver-bind-port=8443"
	I0520 13:29:48.282388  624195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u17l93.2jyx28d5o2okpqwi --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m02 --control-plane --apiserver-advertise-address=192.168.39.155 --apiserver-bind-port=8443": (22.822635673s)
	I0520 13:29:48.282441  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 13:29:48.786469  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194-m02 minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=false
	I0520 13:29:48.922273  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170194-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 13:29:49.030083  624195 start.go:318] duration metric: took 23.734298138s to joinCluster
	I0520 13:29:49.030205  624195 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:29:49.032631  624195 out.go:177] * Verifying Kubernetes components...
	I0520 13:29:49.030513  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:49.035244  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:49.315891  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:49.348786  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:29:49.349119  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 13:29:49.349218  624195 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.92:8443
	I0520 13:29:49.349484  624195 node_ready.go:35] waiting up to 6m0s for node "ha-170194-m02" to be "Ready" ...
	I0520 13:29:49.349577  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:49.349585  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:49.349593  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:49.349596  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:49.359468  624195 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 13:29:49.849934  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:49.849961  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:49.849971  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:49.849975  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:49.855145  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:50.350315  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:50.350348  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:50.350362  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:50.350369  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:50.355497  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:50.849881  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:50.849913  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:50.849925  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:50.849930  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:50.853109  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:51.350010  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:51.350033  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:51.350041  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:51.350045  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:51.352871  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:51.353399  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:51.850494  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:51.850519  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:51.850527  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:51.850532  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:51.853666  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:52.350617  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:52.350644  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:52.350655  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:52.350659  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:52.390069  624195 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0520 13:29:52.850447  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:52.850474  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:52.850486  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:52.850494  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:52.853650  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:53.349939  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:53.349966  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:53.349975  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:53.349980  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:53.353696  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:53.354214  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:53.850573  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:53.850604  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:53.850616  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:53.850623  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:53.854185  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:54.350160  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:54.350186  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:54.350198  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:54.350205  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:54.354161  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:54.850115  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:54.850146  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:54.850156  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:54.850164  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:54.854077  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.349986  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:55.350013  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:55.350025  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:55.350033  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:55.353457  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.850027  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:55.850050  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:55.850058  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:55.850062  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:55.853733  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:55.854520  624195 node_ready.go:53] node "ha-170194-m02" has status "Ready":"False"
	I0520 13:29:56.349897  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.349926  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.349934  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.349937  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.353540  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.354372  624195 node_ready.go:49] node "ha-170194-m02" has status "Ready":"True"
	I0520 13:29:56.354396  624195 node_ready.go:38] duration metric: took 7.004890219s for node "ha-170194-m02" to be "Ready" ...
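
The repeated GETs above are a plain poll loop: fetch /api/v1/nodes/<name> every ~500ms, inspect status.conditions, and stop once the Ready condition reports True (the pod checks that follow use the same shape against /api/v1/namespaces/kube-system/pods). A self-contained sketch of that check with net/http, assuming an *http.Client already carrying the client certificate from the kubeconfig; the struct below decodes only the fields the check needs:

	package readiness

	import (
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	// nodeStatus mirrors just enough of the Node object to read its conditions.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// waitNodeReady polls the API server until the node reports Ready=True or
	// the timeout expires. client must already carry the TLS credentials.
	func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				var ns nodeStatus
				decodeErr := json.NewDecoder(resp.Body).Decode(&ns)
				resp.Body.Close()
				if decodeErr == nil {
					for _, c := range ns.Status.Conditions {
						if c.Type == "Ready" && c.Status == "True" {
							return nil
						}
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %s not Ready within %s", node, timeout)
	}

A call such as waitNodeReady(client, "https://192.168.39.92:8443", "ha-170194-m02", 6*time.Minute) corresponds to the 6m0s wait started above.
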
	I0520 13:29:56.354409  624195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:29:56.354495  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:29:56.354509  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.354520  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.354527  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.359557  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:29:56.365455  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.365593  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s28r6
	I0520 13:29:56.365607  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.365618  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.365626  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.369109  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.369824  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.369843  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.369852  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.369856  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.372341  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.373106  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.373128  624195 pod_ready.go:81] duration metric: took 7.643435ms for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.373140  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.373218  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vk78q
	I0520 13:29:56.373229  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.373239  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.373272  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.375884  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.376473  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.376487  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.376493  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.376941  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.380502  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.380962  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.380987  624195 pod_ready.go:81] duration metric: took 7.835879ms for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.380998  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.381057  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194
	I0520 13:29:56.381065  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.381072  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.381079  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.383534  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.384029  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:29:56.384043  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.384050  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.384054  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.386238  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.386702  624195 pod_ready.go:92] pod "etcd-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:29:56.386723  624195 pod_ready.go:81] duration metric: took 5.714217ms for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.386731  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:29:56.386782  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:56.386790  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.386796  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.386799  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.389074  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.389693  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.389709  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.389720  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.389724  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.391967  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:56.888044  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:56.888083  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.888102  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.888111  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.892011  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:56.892734  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:56.892754  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:56.892764  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:56.892769  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:56.895694  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:57.387037  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:57.387073  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.387082  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.387087  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.391196  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:29:57.392147  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:57.392165  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.392173  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.392176  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.395968  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:57.887577  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:57.887607  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.887616  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.887620  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.891376  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:57.892031  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:57.892050  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:57.892057  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:57.892061  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:57.894823  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:58.387075  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:58.387101  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.387109  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.387113  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.391171  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:29:58.392063  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:58.392084  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.392094  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.392100  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.395375  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:58.395926  624195 pod_ready.go:102] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 13:29:58.887336  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:58.887365  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.887375  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.887379  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.890962  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:58.891646  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:58.891668  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:58.891676  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:58.891680  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:58.894580  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:29:59.387408  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:59.387434  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.387441  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.387444  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.391372  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.391902  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:59.391918  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.391925  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.391929  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.395200  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.887604  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:29:59.887632  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.887640  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.887643  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.890788  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:29:59.891549  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:29:59.891569  624195 round_trippers.go:469] Request Headers:
	I0520 13:29:59.891582  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:29:59.891587  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:29:59.894490  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.387384  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:00.387409  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.387418  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.387422  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.391021  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:00.391902  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:00.391922  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.391929  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.391933  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.394751  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.887686  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:00.887713  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.887721  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.887725  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.890982  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:00.891621  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:00.891636  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:00.891643  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:00.891647  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:00.894420  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:00.894843  624195 pod_ready.go:102] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 13:30:01.387197  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:01.387224  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.387234  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.387237  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.390720  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:01.391270  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:01.391284  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.391291  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.391295  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.393881  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:01.887956  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:01.887999  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.888009  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.888014  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.891512  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:01.892189  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:01.892206  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:01.892214  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:01.892217  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:01.895080  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.387046  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:02.387082  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.387091  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.387095  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.390493  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.391270  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.391293  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.391302  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.391311  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.394529  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.887590  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:30:02.887620  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.887633  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.887639  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.891110  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.891791  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.891809  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.891815  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.891818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.894764  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.895304  624195 pod_ready.go:92] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.895325  624195 pod_ready.go:81] duration metric: took 6.508587194s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.895340  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.895404  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194
	I0520 13:30:02.895411  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.895417  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.895423  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.897809  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.898741  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:02.898760  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.898771  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.898776  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.901124  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.901607  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.901627  624195 pod_ready.go:81] duration metric: took 6.278538ms for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.901637  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.901689  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:30:02.901697  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.901704  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.901709  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.904714  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.905672  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:02.905690  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.905703  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.905709  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.908693  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.909421  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.909443  624195 pod_ready.go:81] duration metric: took 7.798305ms for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.909456  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.909524  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:30:02.909535  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.909545  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.909555  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.912613  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:02.913317  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:02.913333  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.913344  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.913349  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.916309  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:02.916815  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:02.916838  624195 pod_ready.go:81] duration metric: took 7.36995ms for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.916850  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:02.950325  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:30:02.950355  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:02.950367  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:02.950372  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:02.953590  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.150736  624195 request.go:629] Waited for 196.368698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.150799  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.150805  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.150812  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.150818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.156970  624195 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 13:30:03.157575  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.157602  624195 pod_ready.go:81] duration metric: took 240.743475ms for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.157618  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.350912  624195 request.go:629] Waited for 193.189896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:30:03.350994  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:30:03.351001  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.351020  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.351025  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.357026  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:03.550292  624195 request.go:629] Waited for 192.362807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.550393  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:03.550405  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.550414  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.550421  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.553901  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.554429  624195 pod_ready.go:92] pod "kube-proxy-7ncvb" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.554449  624195 pod_ready.go:81] duration metric: took 396.823504ms for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.554460  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.750598  624195 request.go:629] Waited for 196.037132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:30:03.750703  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:30:03.750715  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.750726  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.750744  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.754602  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.950718  624195 request.go:629] Waited for 195.384951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:03.950797  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:03.950804  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:03.950815  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:03.950822  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:03.953903  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:03.954503  624195 pod_ready.go:92] pod "kube-proxy-qth8f" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:03.954524  624195 pod_ready.go:81] duration metric: took 400.058159ms for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:03.954534  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.150643  624195 request.go:629] Waited for 196.034483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:30:04.150730  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:30:04.150749  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.150778  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.150784  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.153734  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:04.350698  624195 request.go:629] Waited for 196.367794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:04.350777  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:30:04.350784  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.350795  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.350807  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.354298  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.354849  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:04.354871  624195 pod_ready.go:81] duration metric: took 400.328782ms for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.354883  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.550951  624195 request.go:629] Waited for 195.969359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:30:04.551018  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:30:04.551023  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.551034  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.551039  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.554386  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.750250  624195 request.go:629] Waited for 195.258698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:04.750314  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:30:04.750319  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.750326  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.750332  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.753603  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:04.754104  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:04.754127  624195 pod_ready.go:81] duration metric: took 399.235803ms for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:04.754142  624195 pod_ready.go:38] duration metric: took 8.399714217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:04.754162  624195 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:30:04.754227  624195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:30:04.768939  624195 api_server.go:72] duration metric: took 15.738685815s to wait for apiserver process to appear ...
	I0520 13:30:04.768965  624195 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:30:04.768989  624195 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0520 13:30:04.775044  624195 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0520 13:30:04.775111  624195 round_trippers.go:463] GET https://192.168.39.92:8443/version
	I0520 13:30:04.775116  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.775125  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.775130  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.776778  624195 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 13:30:04.777087  624195 api_server.go:141] control plane version: v1.30.1
	I0520 13:30:04.777108  624195 api_server.go:131] duration metric: took 8.137141ms to wait for apiserver health ...
	I0520 13:30:04.777116  624195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:30:04.949976  624195 request.go:629] Waited for 172.782765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:04.950054  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:04.950059  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:04.950067  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:04.950073  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:04.955312  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:04.959435  624195 system_pods.go:59] 17 kube-system pods found
	I0520 13:30:04.959467  624195 system_pods.go:61] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:30:04.959472  624195 system_pods.go:61] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:30:04.959476  624195 system_pods.go:61] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:30:04.959480  624195 system_pods.go:61] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:30:04.959485  624195 system_pods.go:61] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:30:04.959489  624195 system_pods.go:61] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:30:04.959493  624195 system_pods.go:61] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:30:04.959496  624195 system_pods.go:61] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:30:04.959499  624195 system_pods.go:61] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:30:04.959503  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:30:04.959506  624195 system_pods.go:61] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:30:04.959509  624195 system_pods.go:61] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:30:04.959512  624195 system_pods.go:61] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:30:04.959515  624195 system_pods.go:61] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:30:04.959518  624195 system_pods.go:61] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:30:04.959521  624195 system_pods.go:61] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:30:04.959525  624195 system_pods.go:61] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:30:04.959534  624195 system_pods.go:74] duration metric: took 182.412172ms to wait for pod list to return data ...
	I0520 13:30:04.959545  624195 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:30:05.149981  624195 request.go:629] Waited for 190.331678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:30:05.150064  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:30:05.150072  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.150086  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.150098  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.152988  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:30:05.153328  624195 default_sa.go:45] found service account: "default"
	I0520 13:30:05.153350  624195 default_sa.go:55] duration metric: took 193.798364ms for default service account to be created ...
	I0520 13:30:05.153362  624195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:30:05.350823  624195 request.go:629] Waited for 197.348911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:05.350904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:30:05.350912  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.350920  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.350933  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.356207  624195 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 13:30:05.360566  624195 system_pods.go:86] 17 kube-system pods found
	I0520 13:30:05.360598  624195 system_pods.go:89] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:30:05.360603  624195 system_pods.go:89] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:30:05.360607  624195 system_pods.go:89] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:30:05.360611  624195 system_pods.go:89] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:30:05.360616  624195 system_pods.go:89] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:30:05.360620  624195 system_pods.go:89] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:30:05.360624  624195 system_pods.go:89] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:30:05.360628  624195 system_pods.go:89] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:30:05.360633  624195 system_pods.go:89] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:30:05.360638  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:30:05.360647  624195 system_pods.go:89] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:30:05.360656  624195 system_pods.go:89] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:30:05.360665  624195 system_pods.go:89] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:30:05.360670  624195 system_pods.go:89] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:30:05.360677  624195 system_pods.go:89] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:30:05.360682  624195 system_pods.go:89] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:30:05.360689  624195 system_pods.go:89] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:30:05.360694  624195 system_pods.go:126] duration metric: took 207.327087ms to wait for k8s-apps to be running ...
	I0520 13:30:05.360705  624195 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:30:05.360766  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:05.375858  624195 system_svc.go:56] duration metric: took 15.138836ms WaitForService to wait for kubelet
	I0520 13:30:05.375893  624195 kubeadm.go:576] duration metric: took 16.345645729s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:05.375920  624195 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:30:05.550424  624195 request.go:629] Waited for 174.382572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes
	I0520 13:30:05.550499  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes
	I0520 13:30:05.550504  624195 round_trippers.go:469] Request Headers:
	I0520 13:30:05.550512  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:30:05.550517  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:30:05.554088  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:30:05.555225  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:05.555258  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:05.555296  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:05.555305  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:05.555312  624195 node_conditions.go:105] duration metric: took 179.386895ms to run NodePressure ...
	I0520 13:30:05.555329  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:30:05.555366  624195 start.go:254] writing updated cluster config ...
	I0520 13:30:05.558538  624195 out.go:177] 
	I0520 13:30:05.561741  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:05.561844  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:05.564790  624195 out.go:177] * Starting "ha-170194-m03" control-plane node in "ha-170194" cluster
	I0520 13:30:05.566924  624195 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:30:05.566982  624195 cache.go:56] Caching tarball of preloaded images
	I0520 13:30:05.567157  624195 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:30:05.567172  624195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:30:05.567277  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:05.567499  624195 start.go:360] acquireMachinesLock for ha-170194-m03: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:30:05.567549  624195 start.go:364] duration metric: took 28.384µs to acquireMachinesLock for "ha-170194-m03"
	I0520 13:30:05.567564  624195 start.go:93] Provisioning new machine with config: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:05.567660  624195 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 13:30:05.570348  624195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:30:05.570457  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:05.570505  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:05.586606  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0520 13:30:05.587084  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:05.587595  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:05.587619  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:05.587936  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:05.588156  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:05.588293  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:05.588466  624195 start.go:159] libmachine.API.Create for "ha-170194" (driver="kvm2")
	I0520 13:30:05.588493  624195 client.go:168] LocalClient.Create starting
	I0520 13:30:05.588527  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 13:30:05.588563  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:05.588579  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:05.588628  624195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 13:30:05.588647  624195 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:05.588659  624195 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:05.588675  624195 main.go:141] libmachine: Running pre-create checks...
	I0520 13:30:05.588683  624195 main.go:141] libmachine: (ha-170194-m03) Calling .PreCreateCheck
	I0520 13:30:05.588822  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:05.589264  624195 main.go:141] libmachine: Creating machine...
	I0520 13:30:05.589282  624195 main.go:141] libmachine: (ha-170194-m03) Calling .Create
	I0520 13:30:05.589408  624195 main.go:141] libmachine: (ha-170194-m03) Creating KVM machine...
	I0520 13:30:05.590790  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found existing default KVM network
	I0520 13:30:05.590939  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found existing private KVM network mk-ha-170194
	I0520 13:30:05.591089  624195 main.go:141] libmachine: (ha-170194-m03) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 ...
	I0520 13:30:05.591115  624195 main.go:141] libmachine: (ha-170194-m03) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:30:05.591169  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.591067  625000 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:30:05.591288  624195 main.go:141] libmachine: (ha-170194-m03) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:30:05.855717  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.855568  625000 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa...
	I0520 13:30:05.951723  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.951593  625000 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/ha-170194-m03.rawdisk...
	I0520 13:30:05.951759  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Writing magic tar header
	I0520 13:30:05.951800  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Writing SSH key tar header
	I0520 13:30:05.951838  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:05.951730  625000 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 ...
	I0520 13:30:05.951862  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03 (perms=drwx------)
	I0520 13:30:05.951879  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:30:05.951900  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03
	I0520 13:30:05.951915  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 13:30:05.951929  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 13:30:05.951944  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 13:30:05.951960  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:30:05.951979  624195 main.go:141] libmachine: (ha-170194-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:30:05.951993  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:30:05.952008  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 13:30:05.952022  624195 main.go:141] libmachine: (ha-170194-m03) Creating domain...
	I0520 13:30:05.952031  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:30:05.952047  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:30:05.952060  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Checking permissions on dir: /home
	I0520 13:30:05.952072  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Skipping /home - not owner
	I0520 13:30:05.953472  624195 main.go:141] libmachine: (ha-170194-m03) define libvirt domain using xml: 
	I0520 13:30:05.953498  624195 main.go:141] libmachine: (ha-170194-m03) <domain type='kvm'>
	I0520 13:30:05.953504  624195 main.go:141] libmachine: (ha-170194-m03)   <name>ha-170194-m03</name>
	I0520 13:30:05.953511  624195 main.go:141] libmachine: (ha-170194-m03)   <memory unit='MiB'>2200</memory>
	I0520 13:30:05.953528  624195 main.go:141] libmachine: (ha-170194-m03)   <vcpu>2</vcpu>
	I0520 13:30:05.953544  624195 main.go:141] libmachine: (ha-170194-m03)   <features>
	I0520 13:30:05.953550  624195 main.go:141] libmachine: (ha-170194-m03)     <acpi/>
	I0520 13:30:05.953560  624195 main.go:141] libmachine: (ha-170194-m03)     <apic/>
	I0520 13:30:05.953566  624195 main.go:141] libmachine: (ha-170194-m03)     <pae/>
	I0520 13:30:05.953572  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.953578  624195 main.go:141] libmachine: (ha-170194-m03)   </features>
	I0520 13:30:05.953584  624195 main.go:141] libmachine: (ha-170194-m03)   <cpu mode='host-passthrough'>
	I0520 13:30:05.953589  624195 main.go:141] libmachine: (ha-170194-m03)   
	I0520 13:30:05.953596  624195 main.go:141] libmachine: (ha-170194-m03)   </cpu>
	I0520 13:30:05.953603  624195 main.go:141] libmachine: (ha-170194-m03)   <os>
	I0520 13:30:05.953613  624195 main.go:141] libmachine: (ha-170194-m03)     <type>hvm</type>
	I0520 13:30:05.953626  624195 main.go:141] libmachine: (ha-170194-m03)     <boot dev='cdrom'/>
	I0520 13:30:05.953633  624195 main.go:141] libmachine: (ha-170194-m03)     <boot dev='hd'/>
	I0520 13:30:05.953645  624195 main.go:141] libmachine: (ha-170194-m03)     <bootmenu enable='no'/>
	I0520 13:30:05.953654  624195 main.go:141] libmachine: (ha-170194-m03)   </os>
	I0520 13:30:05.953659  624195 main.go:141] libmachine: (ha-170194-m03)   <devices>
	I0520 13:30:05.953666  624195 main.go:141] libmachine: (ha-170194-m03)     <disk type='file' device='cdrom'>
	I0520 13:30:05.953675  624195 main.go:141] libmachine: (ha-170194-m03)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/boot2docker.iso'/>
	I0520 13:30:05.953683  624195 main.go:141] libmachine: (ha-170194-m03)       <target dev='hdc' bus='scsi'/>
	I0520 13:30:05.953689  624195 main.go:141] libmachine: (ha-170194-m03)       <readonly/>
	I0520 13:30:05.953695  624195 main.go:141] libmachine: (ha-170194-m03)     </disk>
	I0520 13:30:05.953728  624195 main.go:141] libmachine: (ha-170194-m03)     <disk type='file' device='disk'>
	I0520 13:30:05.953755  624195 main.go:141] libmachine: (ha-170194-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:30:05.953771  624195 main.go:141] libmachine: (ha-170194-m03)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/ha-170194-m03.rawdisk'/>
	I0520 13:30:05.953782  624195 main.go:141] libmachine: (ha-170194-m03)       <target dev='hda' bus='virtio'/>
	I0520 13:30:05.953791  624195 main.go:141] libmachine: (ha-170194-m03)     </disk>
	I0520 13:30:05.953799  624195 main.go:141] libmachine: (ha-170194-m03)     <interface type='network'>
	I0520 13:30:05.953810  624195 main.go:141] libmachine: (ha-170194-m03)       <source network='mk-ha-170194'/>
	I0520 13:30:05.953818  624195 main.go:141] libmachine: (ha-170194-m03)       <model type='virtio'/>
	I0520 13:30:05.953841  624195 main.go:141] libmachine: (ha-170194-m03)     </interface>
	I0520 13:30:05.953873  624195 main.go:141] libmachine: (ha-170194-m03)     <interface type='network'>
	I0520 13:30:05.953892  624195 main.go:141] libmachine: (ha-170194-m03)       <source network='default'/>
	I0520 13:30:05.953904  624195 main.go:141] libmachine: (ha-170194-m03)       <model type='virtio'/>
	I0520 13:30:05.953915  624195 main.go:141] libmachine: (ha-170194-m03)     </interface>
	I0520 13:30:05.953925  624195 main.go:141] libmachine: (ha-170194-m03)     <serial type='pty'>
	I0520 13:30:05.953934  624195 main.go:141] libmachine: (ha-170194-m03)       <target port='0'/>
	I0520 13:30:05.953943  624195 main.go:141] libmachine: (ha-170194-m03)     </serial>
	I0520 13:30:05.953949  624195 main.go:141] libmachine: (ha-170194-m03)     <console type='pty'>
	I0520 13:30:05.953963  624195 main.go:141] libmachine: (ha-170194-m03)       <target type='serial' port='0'/>
	I0520 13:30:05.953986  624195 main.go:141] libmachine: (ha-170194-m03)     </console>
	I0520 13:30:05.954007  624195 main.go:141] libmachine: (ha-170194-m03)     <rng model='virtio'>
	I0520 13:30:05.954023  624195 main.go:141] libmachine: (ha-170194-m03)       <backend model='random'>/dev/random</backend>
	I0520 13:30:05.954034  624195 main.go:141] libmachine: (ha-170194-m03)     </rng>
	I0520 13:30:05.954044  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.954051  624195 main.go:141] libmachine: (ha-170194-m03)     
	I0520 13:30:05.954062  624195 main.go:141] libmachine: (ha-170194-m03)   </devices>
	I0520 13:30:05.954070  624195 main.go:141] libmachine: (ha-170194-m03) </domain>
	I0520 13:30:05.954078  624195 main.go:141] libmachine: (ha-170194-m03) 
	I0520 13:30:05.962043  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:5d:3c:46 in network default
	I0520 13:30:05.962773  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring networks are active...
	I0520 13:30:05.962808  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:05.963634  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring network default is active
	I0520 13:30:05.963959  624195 main.go:141] libmachine: (ha-170194-m03) Ensuring network mk-ha-170194 is active
	I0520 13:30:05.964293  624195 main.go:141] libmachine: (ha-170194-m03) Getting domain xml...
	I0520 13:30:05.965021  624195 main.go:141] libmachine: (ha-170194-m03) Creating domain...
	I0520 13:30:07.255402  624195 main.go:141] libmachine: (ha-170194-m03) Waiting to get IP...
	I0520 13:30:07.256427  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.256890  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.256945  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.256883  625000 retry.go:31] will retry after 275.904132ms: waiting for machine to come up
	I0520 13:30:07.534625  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.535196  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.535228  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.535150  625000 retry.go:31] will retry after 354.965705ms: waiting for machine to come up
	I0520 13:30:07.891830  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:07.892379  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:07.892418  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:07.892313  625000 retry.go:31] will retry after 448.861988ms: waiting for machine to come up
	I0520 13:30:08.342904  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:08.343449  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:08.343481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:08.343408  625000 retry.go:31] will retry after 497.367289ms: waiting for machine to come up
	I0520 13:30:08.842056  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:08.842470  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:08.842499  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:08.842428  625000 retry.go:31] will retry after 747.853284ms: waiting for machine to come up
	I0520 13:30:09.591931  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:09.592481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:09.592515  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:09.592408  625000 retry.go:31] will retry after 600.738064ms: waiting for machine to come up
	I0520 13:30:10.195213  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:10.195595  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:10.195622  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:10.195553  625000 retry.go:31] will retry after 1.013177893s: waiting for machine to come up
	I0520 13:30:11.210907  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:11.211446  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:11.211481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:11.211368  625000 retry.go:31] will retry after 1.118159499s: waiting for machine to come up
	I0520 13:30:12.330917  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:12.331414  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:12.331438  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:12.331362  625000 retry.go:31] will retry after 1.645480289s: waiting for machine to come up
	I0520 13:30:13.979298  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:13.979838  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:13.979897  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:13.979810  625000 retry.go:31] will retry after 2.237022659s: waiting for machine to come up
	I0520 13:30:16.218340  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:16.218879  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:16.218910  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:16.218826  625000 retry.go:31] will retry after 2.212494575s: waiting for machine to come up
	I0520 13:30:18.434192  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:18.434650  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:18.434679  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:18.434600  625000 retry.go:31] will retry after 3.191824667s: waiting for machine to come up
	I0520 13:30:21.628441  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:21.628825  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:21.628849  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:21.628788  625000 retry.go:31] will retry after 2.775656421s: waiting for machine to come up
	I0520 13:30:24.406421  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:24.406849  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find current IP address of domain ha-170194-m03 in network mk-ha-170194
	I0520 13:30:24.406882  624195 main.go:141] libmachine: (ha-170194-m03) DBG | I0520 13:30:24.406800  625000 retry.go:31] will retry after 3.444701645s: waiting for machine to come up
	I0520 13:30:27.854117  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.854571  624195 main.go:141] libmachine: (ha-170194-m03) Found IP for machine: 192.168.39.3
	I0520 13:30:27.854593  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has current primary IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.854601  624195 main.go:141] libmachine: (ha-170194-m03) Reserving static IP address...
	I0520 13:30:27.854992  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find host DHCP lease matching {name: "ha-170194-m03", mac: "52:54:00:f7:7b:a7", ip: "192.168.39.3"} in network mk-ha-170194
	I0520 13:30:27.932359  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Getting to WaitForSSH function...
	I0520 13:30:27.932385  624195 main.go:141] libmachine: (ha-170194-m03) Reserved static IP address: 192.168.39.3
	I0520 13:30:27.932399  624195 main.go:141] libmachine: (ha-170194-m03) Waiting for SSH to be available...
	I0520 13:30:27.934878  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:27.935312  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194
	I0520 13:30:27.935346  624195 main.go:141] libmachine: (ha-170194-m03) DBG | unable to find defined IP address of network mk-ha-170194 interface with MAC address 52:54:00:f7:7b:a7
	I0520 13:30:27.935542  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH client type: external
	I0520 13:30:27.935566  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa (-rw-------)
	I0520 13:30:27.935608  624195 main.go:141] libmachine: (ha-170194-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:30:27.935629  624195 main.go:141] libmachine: (ha-170194-m03) DBG | About to run SSH command:
	I0520 13:30:27.935647  624195 main.go:141] libmachine: (ha-170194-m03) DBG | exit 0
	I0520 13:30:27.940409  624195 main.go:141] libmachine: (ha-170194-m03) DBG | SSH cmd err, output: exit status 255: 
	I0520 13:30:27.940438  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 13:30:27.940481  624195 main.go:141] libmachine: (ha-170194-m03) DBG | command : exit 0
	I0520 13:30:27.940512  624195 main.go:141] libmachine: (ha-170194-m03) DBG | err     : exit status 255
	I0520 13:30:27.940529  624195 main.go:141] libmachine: (ha-170194-m03) DBG | output  : 
	I0520 13:30:30.941487  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Getting to WaitForSSH function...
	I0520 13:30:30.944403  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:30.944860  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:30.944889  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:30.945064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH client type: external
	I0520 13:30:30.945166  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa (-rw-------)
	I0520 13:30:30.945195  624195 main.go:141] libmachine: (ha-170194-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:30:30.945204  624195 main.go:141] libmachine: (ha-170194-m03) DBG | About to run SSH command:
	I0520 13:30:30.945264  624195 main.go:141] libmachine: (ha-170194-m03) DBG | exit 0
	I0520 13:30:31.069381  624195 main.go:141] libmachine: (ha-170194-m03) DBG | SSH cmd err, output: <nil>: 
	I0520 13:30:31.069705  624195 main.go:141] libmachine: (ha-170194-m03) KVM machine creation complete!
	I0520 13:30:31.070179  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:31.070838  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:31.071068  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:31.071215  624195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:30:31.071237  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:30:31.072478  624195 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:30:31.072498  624195 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:30:31.072504  624195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:30:31.072510  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.075064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.075496  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.075528  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.075719  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.075920  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.076108  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.076251  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.076493  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.076760  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.076775  624195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:30:31.180572  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:30:31.180602  624195 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:30:31.180613  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.183547  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.183912  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.183935  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.184140  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.184355  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.184491  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.184677  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.184820  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.185060  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.185081  624195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:30:31.285778  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:30:31.285845  624195 main.go:141] libmachine: found compatible host: buildroot
	I0520 13:30:31.285854  624195 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:30:31.285865  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.286164  624195 buildroot.go:166] provisioning hostname "ha-170194-m03"
	I0520 13:30:31.286194  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.286370  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.288853  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.289225  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.289276  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.289382  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.289567  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.289765  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.289918  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.290167  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.290341  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.290354  624195 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194-m03 && echo "ha-170194-m03" | sudo tee /etc/hostname
	I0520 13:30:31.407000  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194-m03
	
	I0520 13:30:31.407034  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.410020  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.410487  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.410513  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.410772  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.411020  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.411193  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.411372  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.411570  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.411761  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.411784  624195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:30:31.521414  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:30:31.521456  624195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:30:31.521476  624195 buildroot.go:174] setting up certificates
	I0520 13:30:31.521489  624195 provision.go:84] configureAuth start
	I0520 13:30:31.521500  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetMachineName
	I0520 13:30:31.521821  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:31.524618  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.525057  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.525088  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.525268  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.527520  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.527911  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.527937  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.528156  624195 provision.go:143] copyHostCerts
	I0520 13:30:31.528194  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:30:31.528231  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:30:31.528240  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:30:31.528303  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:30:31.528374  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:30:31.528408  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:30:31.528421  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:30:31.528458  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:30:31.528526  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:30:31.528548  624195 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:30:31.528554  624195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:30:31.528588  624195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:30:31.528657  624195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194-m03 san=[127.0.0.1 192.168.39.3 ha-170194-m03 localhost minikube]
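	For context, the server cert generated at this step is simply a CA-signed certificate carrying the SAN list shown above; a rough hand-rolled equivalent with openssl (file names are illustrative, not minikube's internal paths) would be:
	  # assuming ca.pem / ca-key.pem are the cluster CA referenced in this log
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-170194-m03"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.3,DNS:ha-170194-m03,DNS:localhost,DNS:minikube')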
	I0520 13:30:31.628385  624195 provision.go:177] copyRemoteCerts
	I0520 13:30:31.628464  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:30:31.628502  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.631324  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.631739  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.631770  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.631960  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.632184  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.632349  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.632518  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:31.721337  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:30:31.721432  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:30:31.743764  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:30:31.743859  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 13:30:31.767363  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:30:31.767462  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:30:31.795379  624195 provision.go:87] duration metric: took 273.870594ms to configureAuth
	I0520 13:30:31.795419  624195 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:30:31.795665  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:31.795746  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:31.798495  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.798948  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:31.798994  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:31.799161  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:31.799350  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.799496  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:31.799675  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:31.799897  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:31.800090  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:31.800113  624195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:30:32.073684  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:30:32.073714  624195 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:30:32.073723  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetURL
	I0520 13:30:32.075156  624195 main.go:141] libmachine: (ha-170194-m03) DBG | Using libvirt version 6000000
	I0520 13:30:32.077610  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.077972  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.078001  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.078234  624195 main.go:141] libmachine: Docker is up and running!
	I0520 13:30:32.078252  624195 main.go:141] libmachine: Reticulating splines...
	I0520 13:30:32.078261  624195 client.go:171] duration metric: took 26.489757298s to LocalClient.Create
	I0520 13:30:32.078288  624195 start.go:167] duration metric: took 26.489823409s to libmachine.API.Create "ha-170194"
	I0520 13:30:32.078298  624195 start.go:293] postStartSetup for "ha-170194-m03" (driver="kvm2")
	I0520 13:30:32.078309  624195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:30:32.078331  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.078592  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:30:32.078616  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.081048  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.081473  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.081494  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.081663  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.081879  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.082086  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.082265  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.163555  624195 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:30:32.168040  624195 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:30:32.168079  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:30:32.168163  624195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:30:32.168278  624195 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:30:32.168292  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:30:32.168411  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:30:32.177451  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:30:32.200519  624195 start.go:296] duration metric: took 122.205083ms for postStartSetup
	I0520 13:30:32.200585  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetConfigRaw
	I0520 13:30:32.201271  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:32.204064  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.204529  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.204561  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.204794  624195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:30:32.205000  624195 start.go:128] duration metric: took 26.637328376s to createHost
	I0520 13:30:32.205036  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.207628  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.208082  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.208111  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.208299  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.208496  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.208664  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.208798  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.208963  624195 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:32.209157  624195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0520 13:30:32.209166  624195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:30:32.313842  624195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211832.294893931
	
	I0520 13:30:32.313871  624195 fix.go:216] guest clock: 1716211832.294893931
	I0520 13:30:32.313881  624195 fix.go:229] Guest: 2024-05-20 13:30:32.294893931 +0000 UTC Remote: 2024-05-20 13:30:32.20501386 +0000 UTC m=+157.451878754 (delta=89.880071ms)
	I0520 13:30:32.313910  624195 fix.go:200] guest clock delta is within tolerance: 89.880071ms
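	The date +%!s(MISSING).%!N(MISSING) above (like the other %!s(MISSING) fragments in this log) is a Go fmt placeholder whose argument was lost when the command string was logged; judging by the seconds.nanoseconds output, the command actually run is date +%s.%N. A rough manual equivalent of this guest-clock check, using the SSH key path from this log, would be:
	  host_ts=$(date +%s.%N)
	  guest_ts=$(ssh -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa docker@192.168.39.3 'date +%s.%N')
	  echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) s"   # minikube only resyncs when this exceeds its tolerance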
	I0520 13:30:32.313917  624195 start.go:83] releasing machines lock for "ha-170194-m03", held for 26.746361199s
	I0520 13:30:32.313941  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.314262  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:32.317143  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.317565  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.317592  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.320794  624195 out.go:177] * Found network options:
	I0520 13:30:32.323012  624195 out.go:177]   - NO_PROXY=192.168.39.92,192.168.39.155
	W0520 13:30:32.325151  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 13:30:32.325178  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:30:32.325195  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.325868  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.326135  624195 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:30:32.326282  624195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:30:32.326330  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	W0520 13:30:32.326450  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 13:30:32.326478  624195 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 13:30:32.326551  624195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:30:32.326578  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:30:32.329559  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.329733  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.329971  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.329999  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.330027  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:32.330046  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:32.330200  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.330339  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:30:32.330447  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.330547  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:30:32.330622  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.330703  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:30:32.330764  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.330850  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:30:32.569233  624195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:30:32.574887  624195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:30:32.574990  624195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:30:32.590259  624195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:30:32.590285  624195 start.go:494] detecting cgroup driver to use...
	I0520 13:30:32.590371  624195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:30:32.607145  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:30:32.620710  624195 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:30:32.620766  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:30:32.636122  624195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:30:32.649419  624195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:30:32.767377  624195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:30:32.904453  624195 docker.go:233] disabling docker service ...
	I0520 13:30:32.904532  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:30:32.919111  624195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:30:32.934079  624195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:30:33.065432  624195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:30:33.208470  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:30:33.221756  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:30:33.239327  624195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:30:33.239396  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.249566  624195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:30:33.249628  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.259729  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.269936  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.280434  624195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:30:33.291428  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.301588  624195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:33.319083  624195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
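	Taken together, the sed edits above should leave roughly the following settings in /etc/crio/crio.conf.d/02-crio.conf (approximate fragment reconstructed from the commands, not copied from the node):
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]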
	I0520 13:30:33.329307  624195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:30:33.338655  624195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:30:33.338709  624195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:30:33.352806  624195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:30:33.362484  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:33.474132  624195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:30:33.604602  624195 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:30:33.604688  624195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:30:33.609703  624195 start.go:562] Will wait 60s for crictl version
	I0520 13:30:33.609778  624195 ssh_runner.go:195] Run: which crictl
	I0520 13:30:33.614003  624195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:30:33.657808  624195 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:30:33.657897  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:30:33.685063  624195 ssh_runner.go:195] Run: crio --version
	I0520 13:30:33.714493  624195 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:30:33.716563  624195 out.go:177]   - env NO_PROXY=192.168.39.92
	I0520 13:30:33.718655  624195 out.go:177]   - env NO_PROXY=192.168.39.92,192.168.39.155
	I0520 13:30:33.720506  624195 main.go:141] libmachine: (ha-170194-m03) Calling .GetIP
	I0520 13:30:33.723281  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:33.723726  624195 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:30:33.723759  624195 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:30:33.723993  624195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:30:33.728492  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:30:33.740260  624195 mustload.go:65] Loading cluster: ha-170194
	I0520 13:30:33.740552  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:33.740896  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:33.740940  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:33.756043  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0520 13:30:33.756479  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:33.756976  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:33.756998  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:33.757399  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:33.757626  624195 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:30:33.759265  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:30:33.759544  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:33.759589  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:33.774705  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0520 13:30:33.775152  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:33.775634  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:33.775657  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:33.775953  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:33.776165  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:30:33.776372  624195 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.3
	I0520 13:30:33.776384  624195 certs.go:194] generating shared ca certs ...
	I0520 13:30:33.776404  624195 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:33.776535  624195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:30:33.776588  624195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:30:33.776602  624195 certs.go:256] generating profile certs ...
	I0520 13:30:33.776691  624195 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:30:33.776723  624195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2
	I0520 13:30:33.776747  624195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.3 192.168.39.254]
	I0520 13:30:34.113198  624195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 ...
	I0520 13:30:34.113235  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2: {Name:mkf5e6820326fafcde9d57b89600ed56eebf0206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:34.113459  624195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2 ...
	I0520 13:30:34.113479  624195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2: {Name:mk508eb53b19d6075bb0e8a9ef600d6014e40055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:34.113580  624195 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.c45135b2 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:30:34.113736  624195 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.c45135b2 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
	I0520 13:30:34.113902  624195 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:30:34.113923  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:30:34.113973  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:30:34.113996  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:30:34.114014  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:30:34.114034  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:30:34.114053  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:30:34.114072  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:30:34.114089  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:30:34.114155  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:30:34.114196  624195 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:30:34.114219  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:30:34.114266  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:30:34.114297  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:30:34.114335  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:30:34.114399  624195 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:30:34.114440  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.114462  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.114479  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.114525  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:30:34.117904  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:34.118360  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:30:34.118383  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:34.118556  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:30:34.118815  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:30:34.119010  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:30:34.119191  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:30:34.189734  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 13:30:34.194562  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 13:30:34.213703  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 13:30:34.217803  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 13:30:34.229298  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 13:30:34.233740  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 13:30:34.251363  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 13:30:34.259057  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 13:30:34.272535  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 13:30:34.276778  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 13:30:34.287992  624195 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 13:30:34.291840  624195 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 13:30:34.306446  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:30:34.330696  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:30:34.352546  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:30:34.374486  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:30:34.395715  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 13:30:34.417427  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:30:34.440656  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:30:34.463384  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:30:34.486723  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:30:34.509426  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:30:34.531288  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:30:34.553354  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 13:30:34.569607  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 13:30:34.585507  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 13:30:34.600625  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 13:30:34.617392  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 13:30:34.634444  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 13:30:34.651286  624195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 13:30:34.667991  624195 ssh_runner.go:195] Run: openssl version
	I0520 13:30:34.673650  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:30:34.684113  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.688566  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.688616  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:34.694066  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:30:34.704778  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:30:34.715397  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.719759  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.719848  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:30:34.726249  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:30:34.737957  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:30:34.749329  624195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.753506  624195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.753675  624195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:30:34.759062  624195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
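	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each certificate gets a symlink named <subject-hash>.0 so TLS clients can find it under /etc/ssl/certs. Generically:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"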
	I0520 13:30:34.770545  624195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:30:34.774401  624195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:30:34.774457  624195 kubeadm.go:928] updating node {m03 192.168.39.3 8443 v1.30.1 crio true true} ...
	I0520 13:30:34.774558  624195 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
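	The kubelet unit drop-in printed above is what is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 13:30:35 below); on the node it can be inspected with, for example:
	  systemctl cat kubelet
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf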
	I0520 13:30:34.774589  624195 kube-vip.go:115] generating kube-vip config ...
	I0520 13:30:34.774630  624195 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:30:34.789410  624195 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
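	Because control-plane load-balancing is auto-enabled, kube-vip depends on the IPVS modules loaded by the modprobe above; a quick sanity check on the node (illustrative, not part of the test) would be:
	  lsmod | grep -e '^ip_vs' -e '^nf_conntrack'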
	I0520 13:30:34.791335  624195 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
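	Once this static pod is running on each control-plane node, the elected kube-vip leader should hold the VIP from the config above; one illustrative way to verify by hand (not part of the test) would be:
	  ip addr show eth0 | grep 192.168.39.254        # VIP bound on the leader
	  curl -k https://192.168.39.254:8443/version    # API server reachable via the VIP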
	I0520 13:30:34.791392  624195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:30:34.801200  624195 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 13:30:34.801287  624195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 13:30:34.810988  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 13:30:34.810999  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 13:30:34.811015  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:30:34.810996  624195 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 13:30:34.811054  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:34.811064  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:30:34.811088  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 13:30:34.811141  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 13:30:34.828229  624195 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:30:34.828324  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 13:30:34.828347  624195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 13:30:34.828363  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 13:30:34.828383  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 13:30:34.828407  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 13:30:34.843958  624195 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 13:30:34.844008  624195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
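	The "Not caching binary" lines above point at dl.k8s.io with a companion .sha256 file; fetching and verifying one of these binaries by hand would look roughly like:
	  curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check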
	I0520 13:30:35.711844  624195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 13:30:35.721772  624195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0520 13:30:35.739516  624195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:30:35.756221  624195 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 13:30:35.774613  624195 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:30:35.778519  624195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:30:35.790710  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:35.916011  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:30:35.933865  624195 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:30:35.934374  624195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:35.934441  624195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:35.950848  624195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0520 13:30:35.951361  624195 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:35.951824  624195 main.go:141] libmachine: Using API Version  1
	I0520 13:30:35.951849  624195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:35.952191  624195 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:35.952474  624195 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:30:35.952720  624195 start.go:316] joinCluster: &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
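
(editor's note) For orientation only: the profile dumped above corresponds roughly to a start invocation of the form below. This is reconstructed from the config fields, not taken from the test output, and the exact flag set and defaults are assumptions:

	out/minikube-linux-amd64 start -p ha-170194 --ha \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.30.1 --memory=2200 --cpus=2
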
	I0520 13:30:35.952861  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 13:30:35.952885  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:30:35.956312  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:35.956776  624195 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:30:35.956808  624195 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:30:35.956971  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:30:35.957156  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:30:35.957328  624195 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:30:35.957489  624195 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
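
(editor's note) The ssh client created above maps onto a plain OpenSSH session; an equivalent manual connection, using the key path, user, and address from the log, would be:

	ssh -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa \
	    -p 22 docker@192.168.39.92
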
	I0520 13:30:36.186912  624195 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:36.186977  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsr1q3.gj6neebntzvy8le2 --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m03 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443"
	I0520 13:31:05.011535  624195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsr1q3.gj6neebntzvy8le2 --discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m03 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443": (28.824526203s)
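
(editor's note) The join is the standard two-step kubeadm flow: print a join command on an existing control-plane node, then run it with control-plane flags on the new node. A trimmed sketch of the two Run lines above, with the token and CA hash left as placeholders:

	# on an existing control plane (ha-170194):
	sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	    kubeadm token create --print-join-command --ttl=0

	# on the joining node (ha-170194-m03), using the printed token/hash:
	sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	    kubeadm join control-plane.minikube.internal:8443 \
	    --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	    --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443 \
	    --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-170194-m03 \
	    --ignore-preflight-errors=all
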
	I0520 13:31:05.011580  624195 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 13:31:05.524316  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170194-m03 minikube.k8s.io/updated_at=2024_05_20T13_31_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=false
	I0520 13:31:05.658744  624195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170194-m03 node-role.kubernetes.io/control-plane:NoSchedule-
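
(editor's note) After the join, the new node is labelled and its control-plane NoSchedule taint is removed. The two kubectl calls above, trimmed to the essential labels:

	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    label --overwrite nodes ha-170194-m03 minikube.k8s.io/name=ha-170194 minikube.k8s.io/primary=false
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    taint nodes ha-170194-m03 node-role.kubernetes.io/control-plane:NoSchedule-
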
	I0520 13:31:05.798072  624195 start.go:318] duration metric: took 29.845347226s to joinCluster
	I0520 13:31:05.798171  624195 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:31:05.798564  624195 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:31:05.800581  624195 out.go:177] * Verifying Kubernetes components...
	I0520 13:31:05.802637  624195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:31:05.992517  624195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:31:06.013170  624195 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:31:06.013455  624195 kapi.go:59] client config for ha-170194: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 13:31:06.013560  624195 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.92:8443
	I0520 13:31:06.013797  624195 node_ready.go:35] waiting up to 6m0s for node "ha-170194-m03" to be "Ready" ...
	I0520 13:31:06.013901  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:06.013911  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:06.013920  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:06.013929  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:06.017203  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:06.515041  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:06.515066  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:06.515075  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:06.515078  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:06.519172  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:07.014089  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:07.014123  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:07.014135  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:07.014142  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:07.017702  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:07.514876  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:07.514902  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:07.514910  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:07.514913  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:07.518287  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:08.014401  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:08.014431  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:08.014440  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:08.014443  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:08.026363  624195 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 13:31:08.027598  624195 node_ready.go:53] node "ha-170194-m03" has status "Ready":"False"
	I0520 13:31:08.514624  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:08.514657  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:08.514666  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:08.514672  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:08.518249  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:09.014247  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:09.014273  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:09.014280  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:09.014285  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:09.017946  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:09.514146  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:09.514179  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:09.514190  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:09.514194  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:09.517927  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.014405  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:10.014430  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:10.014437  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:10.014442  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:10.018434  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.514854  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:10.514883  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:10.514898  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:10.514903  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:10.518625  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:10.519080  624195 node_ready.go:53] node "ha-170194-m03" has status "Ready":"False"
	I0520 13:31:11.014264  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:11.014285  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:11.014295  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:11.014300  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:11.018048  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:11.514545  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:11.514574  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:11.514584  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:11.514592  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:11.518182  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:12.014767  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:12.014791  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:12.014799  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:12.014803  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:12.018424  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:12.514459  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:12.514487  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:12.514496  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:12.514511  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:12.517977  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.014776  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.014799  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.014807  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.014812  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.018553  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.019163  624195 node_ready.go:49] node "ha-170194-m03" has status "Ready":"True"
	I0520 13:31:13.019186  624195 node_ready.go:38] duration metric: took 7.005369464s for node "ha-170194-m03" to be "Ready" ...
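
(editor's note) The loop above polls GET /api/v1/nodes/ha-170194-m03 roughly every 500ms until the Ready condition flips to True. For manual checking, an equivalent (not what minikube itself runs) is:

	kubectl wait --for=condition=Ready node/ha-170194-m03 --timeout=6m0s
	# or inspect the condition directly:
	kubectl get node ha-170194-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
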
	I0520 13:31:13.019204  624195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:31:13.019298  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:13.019310  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.019321  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.019332  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.030581  624195 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 13:31:13.037455  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.037554  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s28r6
	I0520 13:31:13.037561  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.037572  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.037582  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.041871  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:13.042775  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.042795  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.042802  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.042805  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.047300  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:13.048039  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.048065  624195 pod_ready.go:81] duration metric: took 10.575387ms for pod "coredns-7db6d8ff4d-s28r6" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.048078  624195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.048164  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vk78q
	I0520 13:31:13.048175  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.048186  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.048191  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.052157  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.053021  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.053041  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.053051  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.053057  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.056084  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.056704  624195 pod_ready.go:92] pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.056730  624195 pod_ready.go:81] duration metric: took 8.643405ms for pod "coredns-7db6d8ff4d-vk78q" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.056743  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.056829  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194
	I0520 13:31:13.056841  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.056851  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.056856  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.060227  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.061330  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:13.061346  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.061353  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.061357  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.063748  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.064252  624195 pod_ready.go:92] pod "etcd-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.064272  624195 pod_ready.go:81] duration metric: took 7.521309ms for pod "etcd-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.064281  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.064430  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m02
	I0520 13:31:13.064450  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.064462  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.064468  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.067471  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.068335  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:13.068352  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.068360  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.068365  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.070826  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.071358  624195 pod_ready.go:92] pod "etcd-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:13.071381  624195 pod_ready.go:81] duration metric: took 7.0933ms for pod "etcd-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.071390  624195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:13.215767  624195 request.go:629] Waited for 144.303996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.215834  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.215839  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.215847  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.215852  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.219854  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.415135  624195 request.go:629] Waited for 194.54887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.415199  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.415204  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.415212  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.415216  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.418216  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:13.615926  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:13.615954  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.615966  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.615976  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.619529  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:13.815234  624195 request.go:629] Waited for 194.980132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.815321  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:13.815327  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:13.815335  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:13.815339  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:13.818403  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.072366  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:14.072392  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.072400  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.072409  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.076193  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.215202  624195 request.go:629] Waited for 138.334855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.215274  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.215281  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.215293  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.215303  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.218526  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.571911  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:14.571936  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.571944  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.571949  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.574981  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:14.615120  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:14.615144  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:14.615157  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:14.615163  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:14.619146  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.071983  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:15.072025  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.072033  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.072039  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.075738  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.076783  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:15.076801  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.076813  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.076818  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.080319  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.081093  624195 pod_ready.go:102] pod "etcd-ha-170194-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 13:31:15.572090  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:15.572114  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.572121  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.572125  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.575935  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:15.577358  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:15.577374  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:15.577380  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:15.577383  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:15.580077  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.072328  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170194-m03
	I0520 13:31:16.072370  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.072388  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.072392  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.075823  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.076583  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:16.076601  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.076612  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.076618  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.079633  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.080376  624195 pod_ready.go:92] pod "etcd-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.080404  624195 pod_ready.go:81] duration metric: took 3.009005007s for pod "etcd-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.080427  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.080516  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194
	I0520 13:31:16.080528  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.080539  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.080545  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.083475  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:16.215453  624195 request.go:629] Waited for 131.322215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:16.215521  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:16.215526  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.215534  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.215537  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.218968  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.219547  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.219581  624195 pod_ready.go:81] duration metric: took 139.142475ms for pod "kube-apiserver-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.219600  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.414834  624195 request.go:629] Waited for 195.128013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:31:16.414904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m02
	I0520 13:31:16.414912  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.414924  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.414931  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.418491  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.615788  624195 request.go:629] Waited for 196.397178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:16.615904  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:16.615912  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.615920  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.615926  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.619495  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:16.620081  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:16.620102  624195 pod_ready.go:81] duration metric: took 400.491978ms for pod "kube-apiserver-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.620115  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:16.815192  624195 request.go:629] Waited for 194.989325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m03
	I0520 13:31:16.815261  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170194-m03
	I0520 13:31:16.815267  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:16.815274  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:16.815278  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:16.818421  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.015533  624195 request.go:629] Waited for 196.24531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:17.015607  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:17.015614  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.015624  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.015636  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.022248  624195 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 13:31:17.023408  624195 pod_ready.go:92] pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.023431  624195 pod_ready.go:81] duration metric: took 403.30886ms for pod "kube-apiserver-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.023442  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.215734  624195 request.go:629] Waited for 192.175061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:31:17.215807  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194
	I0520 13:31:17.215815  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.215828  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.215836  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.219228  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.415232  624195 request.go:629] Waited for 195.384768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:17.415324  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:17.415332  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.415345  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.415355  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.419687  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:17.420313  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.420341  624195 pod_ready.go:81] duration metric: took 396.891022ms for pod "kube-controller-manager-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.420356  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.615311  624195 request.go:629] Waited for 194.86432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:31:17.615384  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m02
	I0520 13:31:17.615390  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.615402  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.615409  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.619221  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.814817  624195 request.go:629] Waited for 194.943333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:17.814896  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:17.814901  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:17.814910  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:17.814917  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:17.818114  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:17.818728  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:17.818754  624195 pod_ready.go:81] duration metric: took 398.390202ms for pod "kube-controller-manager-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:17.818768  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:18.015719  624195 request.go:629] Waited for 196.878935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.015788  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.015793  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.015801  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.015804  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.019535  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.215470  624195 request.go:629] Waited for 195.360843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.215557  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.215562  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.215568  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.215573  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.219147  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.415659  624195 request.go:629] Waited for 96.287075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.415765  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.415779  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.415790  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.415801  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.419431  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:18.615483  624195 request.go:629] Waited for 195.37727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.615548  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:18.615554  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.615562  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.615566  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.618117  624195 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 13:31:18.819673  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170194-m03
	I0520 13:31:18.819703  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:18.819714  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:18.819721  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:18.823309  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.015398  624195 request.go:629] Waited for 191.428653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:19.015458  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:19.015463  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.015471  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.015475  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.018833  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.019547  624195 pod_ready.go:92] pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.019569  624195 pod_ready.go:81] duration metric: took 1.200793801s for pod "kube-controller-manager-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.019585  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.214958  624195 request.go:629] Waited for 195.280082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:31:19.215061  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7ncvb
	I0520 13:31:19.215069  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.215080  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.215087  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.218621  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.414947  624195 request.go:629] Waited for 195.319457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:19.415069  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:19.415083  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.415093  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.415102  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.418554  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.419277  624195 pod_ready.go:92] pod "kube-proxy-7ncvb" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.419309  624195 pod_ready.go:81] duration metric: took 399.714792ms for pod "kube-proxy-7ncvb" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.419324  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.615253  624195 request.go:629] Waited for 195.822388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:31:19.615320  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qth8f
	I0520 13:31:19.615325  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.615334  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.615341  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.619457  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:19.815371  624195 request.go:629] Waited for 194.935251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:19.815435  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:19.815441  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:19.815449  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:19.815454  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:19.819118  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:19.819715  624195 pod_ready.go:92] pod "kube-proxy-qth8f" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:19.819739  624195 pod_ready.go:81] duration metric: took 400.407376ms for pod "kube-proxy-qth8f" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:19.819749  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x79p4" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.015290  624195 request.go:629] Waited for 195.444697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x79p4
	I0520 13:31:20.015376  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x79p4
	I0520 13:31:20.015385  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.015396  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.015408  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.018963  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.214916  624195 request.go:629] Waited for 195.313944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:20.215022  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:20.215034  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.215045  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.215053  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.218191  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.218728  624195 pod_ready.go:92] pod "kube-proxy-x79p4" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:20.218749  624195 pod_ready.go:81] duration metric: took 398.99258ms for pod "kube-proxy-x79p4" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.218758  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.415324  624195 request.go:629] Waited for 196.464631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:31:20.415398  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194
	I0520 13:31:20.415406  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.415417  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.415428  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.418650  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.614968  624195 request.go:629] Waited for 195.495433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:20.615073  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194
	I0520 13:31:20.615083  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.615096  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.615105  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.618843  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:20.619666  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:20.619692  624195 pod_ready.go:81] duration metric: took 400.925254ms for pod "kube-scheduler-ha-170194" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.619706  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:20.815717  624195 request.go:629] Waited for 195.912804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:31:20.815792  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m02
	I0520 13:31:20.815797  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:20.815805  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:20.815815  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:20.819303  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.015424  624195 request.go:629] Waited for 195.520036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:21.015488  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m02
	I0520 13:31:21.015493  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.015501  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.015505  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.018661  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.019331  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:21.019354  624195 pod_ready.go:81] duration metric: took 399.641422ms for pod "kube-scheduler-ha-170194-m02" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.019365  624195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.215514  624195 request.go:629] Waited for 196.051281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m03
	I0520 13:31:21.215610  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170194-m03
	I0520 13:31:21.215622  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.215633  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.215643  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.219132  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.415037  624195 request.go:629] Waited for 195.328033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:21.415119  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes/ha-170194-m03
	I0520 13:31:21.415181  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.415195  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.415200  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.419418  624195 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 13:31:21.420515  624195 pod_ready.go:92] pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 13:31:21.420541  624195 pod_ready.go:81] duration metric: took 401.168267ms for pod "kube-scheduler-ha-170194-m03" in "kube-system" namespace to be "Ready" ...
	I0520 13:31:21.420557  624195 pod_ready.go:38] duration metric: took 8.401336746s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
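
(editor's note) The block above is the extra wait for system-critical pods, one pod GET plus one node GET per pod, throttled client-side. A manual approximation with kubectl, using the label selectors from the log (the timeout value is an assumption):

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
	done
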
	I0520 13:31:21.420582  624195 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:31:21.420667  624195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:31:21.438240  624195 api_server.go:72] duration metric: took 15.640012749s to wait for apiserver process to appear ...
	I0520 13:31:21.438273  624195 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:31:21.438293  624195 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0520 13:31:21.442679  624195 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
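	The healthz wait above simply issues a GET against https://192.168.39.92:8443/healthz and expects the literal body "ok". A minimal sketch follows; for brevity it skips TLS verification, whereas the real check presumably trusts the cluster CA, so treat the TLS handling as an assumption.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification keeps the sketch short; a real client
	// would load the cluster CA instead (assumption for this example).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.92:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```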
	I0520 13:31:21.442760  624195 round_trippers.go:463] GET https://192.168.39.92:8443/version
	I0520 13:31:21.442768  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.442775  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.442783  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.443594  624195 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 13:31:21.443657  624195 api_server.go:141] control plane version: v1.30.1
	I0520 13:31:21.443671  624195 api_server.go:131] duration metric: took 5.392584ms to wait for apiserver health ...
	I0520 13:31:21.443681  624195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:31:21.615199  624195 request.go:629] Waited for 171.390196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:21.615275  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:21.615284  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.615295  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.615303  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.622356  624195 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 13:31:21.628951  624195 system_pods.go:59] 24 kube-system pods found
	I0520 13:31:21.628985  624195 system_pods.go:61] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:31:21.628991  624195 system_pods.go:61] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:31:21.628995  624195 system_pods.go:61] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:31:21.628998  624195 system_pods.go:61] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:31:21.629002  624195 system_pods.go:61] "etcd-ha-170194-m03" [22d1124d-4ec7-4721-94d7-b05ee48e4f04] Running
	I0520 13:31:21.629005  624195 system_pods.go:61] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:31:21.629008  624195 system_pods.go:61] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:31:21.629011  624195 system_pods.go:61] "kindnet-q72lt" [1ff7bf65-cfec-4a8d-acb6-7177d005791f] Running
	I0520 13:31:21.629014  624195 system_pods.go:61] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:31:21.629017  624195 system_pods.go:61] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:31:21.629022  624195 system_pods.go:61] "kube-apiserver-ha-170194-m03" [2ab83259-202f-4f75-97ae-7aba8a38638e] Running
	I0520 13:31:21.629025  624195 system_pods.go:61] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:31:21.629028  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:31:21.629032  624195 system_pods.go:61] "kube-controller-manager-ha-170194-m03" [91e02abe-a8d2-48b0-b883-7d5e2cd184ec] Running
	I0520 13:31:21.629035  624195 system_pods.go:61] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:31:21.629038  624195 system_pods.go:61] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:31:21.629041  624195 system_pods.go:61] "kube-proxy-x79p4" [20b12a4a-7f86-4521-9711-7b7efcf74995] Running
	I0520 13:31:21.629047  624195 system_pods.go:61] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:31:21.629050  624195 system_pods.go:61] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:31:21.629056  624195 system_pods.go:61] "kube-scheduler-ha-170194-m03" [5249cfdc-cb02-440e-aee3-a44444184426] Running
	I0520 13:31:21.629059  624195 system_pods.go:61] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:31:21.629061  624195 system_pods.go:61] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:31:21.629067  624195 system_pods.go:61] "kube-vip-ha-170194-m03" [29f858fa-1de2-4632-ae1a-30847a60fa99] Running
	I0520 13:31:21.629072  624195 system_pods.go:61] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:31:21.629078  624195 system_pods.go:74] duration metric: took 185.392781ms to wait for pod list to return data ...
	I0520 13:31:21.629092  624195 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:31:21.815526  624195 request.go:629] Waited for 186.337056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:31:21.815589  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/default/serviceaccounts
	I0520 13:31:21.815600  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:21.815608  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:21.815613  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:21.819248  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:21.819417  624195 default_sa.go:45] found service account: "default"
	I0520 13:31:21.819441  624195 default_sa.go:55] duration metric: took 190.34107ms for default service account to be created ...
	I0520 13:31:21.819453  624195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:31:22.014877  624195 request.go:629] Waited for 195.3227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:22.014940  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/namespaces/kube-system/pods
	I0520 13:31:22.014945  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:22.014953  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:22.014956  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:22.022443  624195 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 13:31:22.028413  624195 system_pods.go:86] 24 kube-system pods found
	I0520 13:31:22.028450  624195 system_pods.go:89] "coredns-7db6d8ff4d-s28r6" [b161e3ee-7969-4861-9778-3bc34356d792] Running
	I0520 13:31:22.028455  624195 system_pods.go:89] "coredns-7db6d8ff4d-vk78q" [334eb0ed-c771-4840-92d1-04c1b9ec5179] Running
	I0520 13:31:22.028460  624195 system_pods.go:89] "etcd-ha-170194" [ded07afc-66d3-496a-acd2-f802c27e5ea9] Running
	I0520 13:31:22.028465  624195 system_pods.go:89] "etcd-ha-170194-m02" [2c007933-6349-40d1-ac8c-05e5cc30d126] Running
	I0520 13:31:22.028469  624195 system_pods.go:89] "etcd-ha-170194-m03" [22d1124d-4ec7-4721-94d7-b05ee48e4f04] Running
	I0520 13:31:22.028473  624195 system_pods.go:89] "kindnet-5mg44" [5d873b63-664b-431b-8d06-3d5d69e3f6a5] Running
	I0520 13:31:22.028477  624195 system_pods.go:89] "kindnet-cmd8x" [44545bda-e29b-44d6-97f7-45290fda6e37] Running
	I0520 13:31:22.028481  624195 system_pods.go:89] "kindnet-q72lt" [1ff7bf65-cfec-4a8d-acb6-7177d005791f] Running
	I0520 13:31:22.028485  624195 system_pods.go:89] "kube-apiserver-ha-170194" [2700e177-dce9-4317-85a7-b067ebeebb90] Running
	I0520 13:31:22.028489  624195 system_pods.go:89] "kube-apiserver-ha-170194-m02" [78430b5f-64e7-413a-afd0-6eac06f49c9e] Running
	I0520 13:31:22.028493  624195 system_pods.go:89] "kube-apiserver-ha-170194-m03" [2ab83259-202f-4f75-97ae-7aba8a38638e] Running
	I0520 13:31:22.028497  624195 system_pods.go:89] "kube-controller-manager-ha-170194" [c4fc6771-8fe2-4ba4-96f6-f1a81c064745] Running
	I0520 13:31:22.028501  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m02" [6104df8e-90a8-4a4f-8099-b8ae8bb5c37d] Running
	I0520 13:31:22.028509  624195 system_pods.go:89] "kube-controller-manager-ha-170194-m03" [91e02abe-a8d2-48b0-b883-7d5e2cd184ec] Running
	I0520 13:31:22.028513  624195 system_pods.go:89] "kube-proxy-7ncvb" [647cc75f-67fb-4921-812e-73f41fca3289] Running
	I0520 13:31:22.028517  624195 system_pods.go:89] "kube-proxy-qth8f" [fc43fd92-69c8-419e-9f78-0b5d489b561a] Running
	I0520 13:31:22.028521  624195 system_pods.go:89] "kube-proxy-x79p4" [20b12a4a-7f86-4521-9711-7b7efcf74995] Running
	I0520 13:31:22.028525  624195 system_pods.go:89] "kube-scheduler-ha-170194" [b4a2557e-86ae-4779-89a3-d0ab45de3ff4] Running
	I0520 13:31:22.028528  624195 system_pods.go:89] "kube-scheduler-ha-170194-m02" [6fb368ac-5c00-4114-b825-72db739a812b] Running
	I0520 13:31:22.028535  624195 system_pods.go:89] "kube-scheduler-ha-170194-m03" [5249cfdc-cb02-440e-aee3-a44444184426] Running
	I0520 13:31:22.028540  624195 system_pods.go:89] "kube-vip-ha-170194" [aed1bd37-f323-4950-b9d0-43e5e2eef5b7] Running
	I0520 13:31:22.028547  624195 system_pods.go:89] "kube-vip-ha-170194-m02" [8dd92207-13d4-4a34-83b9-73d84a185230] Running
	I0520 13:31:22.028550  624195 system_pods.go:89] "kube-vip-ha-170194-m03" [29f858fa-1de2-4632-ae1a-30847a60fa99] Running
	I0520 13:31:22.028555  624195 system_pods.go:89] "storage-provisioner" [ce0e094e-1f65-407a-9fc0-a3c55f9de344] Running
	I0520 13:31:22.028561  624195 system_pods.go:126] duration metric: took 209.098779ms to wait for k8s-apps to be running ...
	I0520 13:31:22.028573  624195 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:31:22.028622  624195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:31:22.045782  624195 system_svc.go:56] duration metric: took 17.199492ms WaitForService to wait for kubelet
	I0520 13:31:22.045815  624195 kubeadm.go:576] duration metric: took 16.247602675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
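	The kubelet service check above runs `sudo systemctl is-active --quiet service kubelet` over SSH inside the VM and treats a zero exit code as "running". The sketch below reproduces the idea locally with os/exec; running it without the SSH hop and without the extra "service" token is a simplification for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```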
	I0520 13:31:22.045835  624195 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:31:22.215313  624195 request.go:629] Waited for 169.380053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.92:8443/api/v1/nodes
	I0520 13:31:22.215376  624195 round_trippers.go:463] GET https://192.168.39.92:8443/api/v1/nodes
	I0520 13:31:22.215381  624195 round_trippers.go:469] Request Headers:
	I0520 13:31:22.215389  624195 round_trippers.go:473]     Accept: application/json, */*
	I0520 13:31:22.215394  624195 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 13:31:22.219272  624195 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 13:31:22.220152  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220187  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220199  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220203  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220207  624195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:31:22.220210  624195 node_conditions.go:123] node cpu capacity is 2
	I0520 13:31:22.220214  624195 node_conditions.go:105] duration metric: took 174.37435ms to run NodePressure ...
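	The NodePressure step above lists all nodes and reads their ephemeral-storage and CPU capacity. A minimal client-go sketch of reading those capacity fields is shown below; the kubeconfig path is an assumption and the output format is illustrative only.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The "ephemeral capacity" and "cpu capacity" log lines above come
		// from these status.capacity fields.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```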
	I0520 13:31:22.220228  624195 start.go:240] waiting for startup goroutines ...
	I0520 13:31:22.220258  624195 start.go:254] writing updated cluster config ...
	I0520 13:31:22.220619  624195 ssh_runner.go:195] Run: rm -f paused
	I0520 13:31:22.275580  624195 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 13:31:22.280049  624195 out.go:177] * Done! kubectl is now configured to use "ha-170194" cluster and "default" namespace by default
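	The final "kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)" line compares the kubectl client's minor version with the control plane's. A small sketch of that comparison is below, assuming plain "major.minor.patch" version strings; it is not the exact routine minikube uses.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" string such
// as "1.30.1"; error handling is kept minimal for brevity.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion := "1.30.1" // values taken from the log line above
	clusterVersion := "1.30.1"

	skew := minorOf(kubectlVersion) - minorOf(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}
```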
	
	
	==> CRI-O <==
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.681460723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212153681434695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76f8821f-0620-4029-a336-b65075035249 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.682083427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33da0ac9-91ba-405b-9cbf-9fbe3555e8b2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.682152000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33da0ac9-91ba-405b-9cbf-9fbe3555e8b2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.682393597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33da0ac9-91ba-405b-9cbf-9fbe3555e8b2 name=/runtime.v1.RuntimeService/ListContainers
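	The CRI-O debug entries above are CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving on the runtime socket, presumably from the kubelet. The sketch below issues the same three calls with the published cri-api stubs; the socket path is the usual CRI-O default and is an assumption, and this is a reader's sketch rather than the kubelet's code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Default CRI-O socket; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("image fs %s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with no filter, as in the log.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%v\n", c.Id, c.Metadata.Name, c.State)
	}
}
```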
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.740038097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8d64aed-1d02-461f-aa2b-905c4adef66c name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.740113370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8d64aed-1d02-461f-aa2b-905c4adef66c name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.741442194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a448e5e1-e69e-418d-85e7-77bf5c44ccfe name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.742044409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212153742019079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a448e5e1-e69e-418d-85e7-77bf5c44ccfe name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.742452896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a038c71-1427-47bc-8d9f-e15c71415cc8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.742655623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a038c71-1427-47bc-8d9f-e15c71415cc8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.743022346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a038c71-1427-47bc-8d9f-e15c71415cc8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.782726422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b4002bf-b1fd-4dcd-9601-3e018f3b7940 name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.782812018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b4002bf-b1fd-4dcd-9601-3e018f3b7940 name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.783603122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ea30fee-1e67-4c70-95ad-cf33cade96ed name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.784072461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212153784049189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ea30fee-1e67-4c70-95ad-cf33cade96ed name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.784617982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=132f37bf-9b9e-4ac0-b4da-47c8a4bc5e3f name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.784710777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=132f37bf-9b9e-4ac0-b4da-47c8a4bc5e3f name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.784983052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=132f37bf-9b9e-4ac0-b4da-47c8a4bc5e3f name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.820310818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82459116-a9bb-447e-b13b-b0e4ffad241d name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.820430494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82459116-a9bb-447e-b13b-b0e4ffad241d name=/runtime.v1.RuntimeService/Version
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.821487361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82962236-be8d-4107-ac3e-1f521901ed2d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.822744428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212153822709817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82962236-be8d-4107-ac3e-1f521901ed2d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.823694236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77175fb1-45a9-469f-afa8-e39f7c477526 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.823779314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77175fb1-45a9-469f-afa8-e39f7c477526 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:35:53 ha-170194 crio[680]: time="2024-05-20 13:35:53.824094224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716211886372227196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59,PodSandboxId:109392450c01eea851e9af5e8cf4458df6fc6c142890a5c82c06a9767bc7d9d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211730573499646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729633688588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211729610085205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-79
69-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2,PodSandboxId:259c31fa9472ee17dcae28eb41a2503cf3c4fb8fb931b8a752f052c76b450a99,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162117
27530589271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211727429880415,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5,PodSandboxId:c2f00aa61309b82dd801eaee80f25d7456c5a1fb193b9cbd8f1ec4acdad1e5e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716211711072316006,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c2ce26b33d78866ea1447fa8f1385b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211708078980294,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23,PodSandboxId:dfcd6dd7a8d331f382c0a93853e63559ce403346268f7c857edca59dfd91ff3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211708087228752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211708025382815,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa,PodSandboxId:e89b9ecab8ffc46ed6d073642eaca55de22955816f35f838c4127fc26356bd30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211707995210035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77175fb1-45a9-469f-afa8-e39f7c477526 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf740d9b5f06d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   85c1015ea36da       busybox-fc5497c4f-kn5pb
	9ea85179fd050       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   109392450c01e       storage-provisioner
	d3c1362d9012c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   cb6f21c242e20       coredns-7db6d8ff4d-vk78q
	6bd28e2e55305       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   901f35680bee5       coredns-7db6d8ff4d-s28r6
	ef86504a6a218       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   259c31fa9472e       kindnet-cmd8x
	2ca782f6be5aa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago       Running             kube-proxy                0                   ef9cc40406ad7       kube-proxy-qth8f
	334824a1ffd8b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c2f00aa61309b       kube-vip-ha-170194
	e40d2be6b414d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   dfcd6dd7a8d33       kube-apiserver-ha-170194
	bd7f5eac64d8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   0a5e941c6740d       etcd-ha-170194
	d125c402bd4cb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   1a02a71cebea3       kube-scheduler-ha-170194
	b0dc1542ea21a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   e89b9ecab8ffc       kube-controller-manager-ha-170194
	
	
	==> coredns [6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583] <==
	[INFO] 127.0.0.1:40383 - 4867 "HINFO IN 2061741283489635823.1468648125148225089. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010760865s
	[INFO] 10.244.0.4:41834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001000301s
	[INFO] 10.244.0.4:48478 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015646089s
	[INFO] 10.244.0.4:56808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.008004652s
	[INFO] 10.244.0.4:39580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113846s
	[INFO] 10.244.0.4:34499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153438s
	[INFO] 10.244.0.4:47635 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003467859s
	[INFO] 10.244.0.4:37386 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211396s
	[INFO] 10.244.0.4:37274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116452s
	[INFO] 10.244.1.2:33488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156093s
	[INFO] 10.244.1.2:44452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130005s
	[INFO] 10.244.2.2:54953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216728s
	[INFO] 10.244.2.2:41118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098892s
	[INFO] 10.244.0.4:52970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086695s
	[INFO] 10.244.0.4:33272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104087s
	[INFO] 10.244.0.4:47074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061643s
	[INFO] 10.244.1.2:46181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125314s
	[INFO] 10.244.1.2:60651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114425s
	[INFO] 10.244.2.2:39831 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092598s
	[INFO] 10.244.2.2:36745 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009346s
	[INFO] 10.244.0.4:58943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126961s
	[INFO] 10.244.0.4:51569 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093816s
	[INFO] 10.244.0.4:33771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095037s
	[INFO] 10.244.1.2:51959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152608s
	[INFO] 10.244.2.2:41273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085919s
	
	
	==> coredns [d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4] <==
	[INFO] 10.244.0.4:60912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096722s
	[INFO] 10.244.1.2:39690 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002116705s
	[INFO] 10.244.1.2:39465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205731s
	[INFO] 10.244.1.2:48674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104027s
	[INFO] 10.244.1.2:42811 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001662979s
	[INFO] 10.244.1.2:55637 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155358s
	[INFO] 10.244.1.2:34282 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105391s
	[INFO] 10.244.2.2:55675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129728s
	[INFO] 10.244.2.2:33579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845622s
	[INFO] 10.244.2.2:38991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087704s
	[INFO] 10.244.2.2:60832 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001368991s
	[INFO] 10.244.2.2:49213 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064756s
	[INFO] 10.244.2.2:54664 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073817s
	[INFO] 10.244.0.4:58834 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096728s
	[INFO] 10.244.1.2:58412 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081949s
	[INFO] 10.244.1.2:52492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085342s
	[INFO] 10.244.2.2:34598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011731s
	[INFO] 10.244.2.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131389s
	[INFO] 10.244.0.4:33373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185564s
	[INFO] 10.244.1.2:38899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131605s
	[INFO] 10.244.1.2:39420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251117s
	[INFO] 10.244.1.2:39569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142225s
	[INFO] 10.244.2.2:33399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185075s
	[INFO] 10.244.2.2:48490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100278s
	[INFO] 10.244.2.2:35988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115036s
	
	
	==> describe nodes <==
	Name:               ha-170194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:28:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:31:38 +0000   Mon, 20 May 2024 13:28:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-170194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c0123e982bf4840b6eb6a3f175c7438
	  System UUID:                4c0123e9-82bf-4840-b6eb-6a3f175c7438
	  Boot ID:                    37123cd6-de29-4d66-9faf-c58bcb2e7628
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kn5pb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7db6d8ff4d-s28r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 coredns-7db6d8ff4d-vk78q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-ha-170194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m20s
	  kube-system                 kindnet-cmd8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m8s
	  kube-system                 kube-apiserver-ha-170194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-170194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-qth8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-ha-170194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-vip-ha-170194                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m6s   kube-proxy       
	  Normal  Starting                 7m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m20s  kubelet          Node ha-170194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s  kubelet          Node ha-170194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s  kubelet          Node ha-170194 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal  NodeReady                7m5s   kubelet          Node ha-170194 status is now: NodeReady
	  Normal  RegisteredNode           5m51s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal  RegisteredNode           4m35s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	
	
	Name:               ha-170194-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:32:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 13:31:48 +0000   Mon, 20 May 2024 13:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    ha-170194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcdee518e92c4c0ba5f3ba763f746ea2
	  System UUID:                dcdee518-e92c-4c0b-a5f3-ba763f746ea2
	  Boot ID:                    c436c0af-64d9-48ee-9d47-d67d9b728b14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmq2s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-170194-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m7s
	  kube-system                 kindnet-5mg44                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-170194-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-170194-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-7ncvb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-170194-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-170194-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m8s                 node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           5m51s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           4m35s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  NodeNotReady             2m55s                node-controller  Node ha-170194-m02 status is now: NodeNotReady
	
	
	Name:               ha-170194-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:35:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:31:31 +0000   Mon, 20 May 2024 13:31:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-170194-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64924ff33ca44b9f8535eb50161a056c
	  System UUID:                64924ff3-3ca4-4b9f-8535-eb50161a056c
	  Boot ID:                    98d78edd-8ff8-4cb4-b546-ec91b16aa0c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vr9tf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-170194-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m51s
	  kube-system                 kindnet-q72lt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m53s
	  kube-system                 kube-apiserver-ha-170194-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-ha-170194-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-x79p4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-ha-170194-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-170194-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node ha-170194-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal  RegisteredNode           4m35s                  node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	
	
	Name:               ha-170194-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:32:28 +0000   Mon, 20 May 2024 13:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-170194-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04786f3c085342e689c4ca279f442854
	  System UUID:                04786f3c-0853-42e6-89c4-ca279f442854
	  Boot ID:                    d9185916-82d4-4a95-9131-2ebf014960ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-98pk9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-proxy-52pf8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m56s (x3 over 3m57s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x3 over 3m57s)  kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x3 over 3m57s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-170194-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 13:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051728] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May20 13:28] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.729382] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.644635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.658967] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056574] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.149929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138520] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.255022] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.918021] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.231733] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.055898] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.968265] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.072694] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.206801] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:29] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2] <==
	{"level":"warn","ts":"2024-05-20T13:35:54.023095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.094279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.109582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.116374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.119732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.122866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.132831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.141678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.149461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.153717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.157073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.167095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.173088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.17999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.183899Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.186835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.19945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.204651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.211078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.214625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.21759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.223005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.223209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.22855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T13:35:54.233991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d468df581a6d993d","from":"d468df581a6d993d","remote-peer-id":"990f835a719da62f","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:35:54 up 7 min,  0 users,  load average: 0.68, 0.53, 0.27
	Linux ha-170194 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2] <==
	I0520 13:35:19.110677       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:35:29.117117       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:35:29.117265       1 main.go:227] handling current node
	I0520 13:35:29.117315       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:35:29.117337       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:35:29.117455       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:35:29.117488       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:35:29.117554       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:35:29.117587       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:35:39.129809       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:35:39.129899       1 main.go:227] handling current node
	I0520 13:35:39.129978       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:35:39.129999       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:35:39.130110       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:35:39.130133       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:35:39.130196       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:35:39.130215       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:35:49.137437       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:35:49.137559       1 main.go:227] handling current node
	I0520 13:35:49.137589       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:35:49.137608       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:35:49.137765       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:35:49.137788       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:35:49.137853       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:35:49.137875       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23] <==
	I0520 13:28:33.025295       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:28:34.336136       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:28:34.371650       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 13:28:34.387581       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:28:46.732023       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 13:28:46.983015       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 13:31:02.034696       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 13:31:02.035118       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 13:31:02.035007       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 30.154µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 13:31:02.036523       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 13:31:02.036670       1 timeout.go:142] post-timeout activity - time-elapsed: 2.126749ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0520 13:31:27.831506       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60936: use of closed network connection
	E0520 13:31:28.040648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60962: use of closed network connection
	E0520 13:31:28.239126       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60992: use of closed network connection
	E0520 13:31:28.433777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32776: use of closed network connection
	E0520 13:31:28.614842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32796: use of closed network connection
	E0520 13:31:28.993178       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32842: use of closed network connection
	E0520 13:31:29.182707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32858: use of closed network connection
	E0520 13:31:29.361088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E0520 13:31:29.673009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32910: use of closed network connection
	E0520 13:31:29.868652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32918: use of closed network connection
	E0520 13:31:30.079857       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32934: use of closed network connection
	E0520 13:31:30.271882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32948: use of closed network connection
	E0520 13:31:30.453468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32964: use of closed network connection
	E0520 13:31:30.632204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32984: use of closed network connection
	
	
	==> kube-controller-manager [b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa] <==
	I0520 13:31:01.275627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-170194-m03"
	I0520 13:31:23.271341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.578576ms"
	I0520 13:31:23.304580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.007878ms"
	I0520 13:31:23.304972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="268.754µs"
	I0520 13:31:23.311369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.747µs"
	I0520 13:31:23.455290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.439314ms"
	I0520 13:31:23.715172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="256.9714ms"
	I0520 13:31:23.715260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.796µs"
	I0520 13:31:23.752201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.784284ms"
	I0520 13:31:23.754347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.648µs"
	I0520 13:31:24.274453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.539µs"
	I0520 13:31:26.989382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.825873ms"
	I0520 13:31:26.989510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.06µs"
	I0520 13:31:27.054021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.119458ms"
	I0520 13:31:27.056157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.965µs"
	I0520 13:31:27.352759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.205088ms"
	I0520 13:31:27.352987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.183µs"
	E0520 13:31:57.804237       1 certificate_controller.go:146] Sync csr-2hqlq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2hqlq": the object has been modified; please apply your changes to the latest version and try again
	I0520 13:31:58.120165       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-170194-m04\" does not exist"
	I0520 13:31:58.167325       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-170194-m04" podCIDRs=["10.244.3.0/24"]
	I0520 13:32:01.303697       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-170194-m04"
	I0520 13:32:08.228987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	I0520 13:32:59.762786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	I0520 13:32:59.893315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.632435ms"
	I0520 13:32:59.893524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.369µs"
	
	
	==> kube-proxy [2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b] <==
	I0520 13:28:47.863207       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:28:47.879661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0520 13:28:47.984212       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:28:47.984972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:28:47.985030       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:28:47.989343       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:28:47.989639       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:28:47.989658       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:28:47.992466       1 config.go:192] "Starting service config controller"
	I0520 13:28:47.992490       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:28:47.993825       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:28:47.993844       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:28:47.997010       1 config.go:319] "Starting node config controller"
	I0520 13:28:47.998321       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:28:48.092782       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:28:48.098009       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:28:48.098469       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8] <==
	W0520 13:28:32.301182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:28:32.301302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:28:32.327409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:28:32.327569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:28:32.466501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:28:32.466603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:28:32.582560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:28:32.582686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:28:35.451216       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 13:31:01.251895       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x79p4\": pod kube-proxy-x79p4 is already assigned to node \"ha-170194-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x79p4" node="ha-170194-m03"
	E0520 13:31:01.252292       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 20b12a4a-7f86-4521-9711-7b7efcf74995(kube-system/kube-proxy-x79p4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x79p4"
	E0520 13:31:01.252358       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x79p4\": pod kube-proxy-x79p4 is already assigned to node \"ha-170194-m03\"" pod="kube-system/kube-proxy-x79p4"
	I0520 13:31:01.252425       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x79p4" node="ha-170194-m03"
	E0520 13:31:23.276303       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kn5pb\": pod busybox-fc5497c4f-kn5pb is already assigned to node \"ha-170194\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kn5pb" node="ha-170194"
	E0520 13:31:23.276385       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc78b16d-ff4a-4bb6-9a1e-62f31641b442(default/busybox-fc5497c4f-kn5pb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kn5pb"
	E0520 13:31:23.276417       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kn5pb\": pod busybox-fc5497c4f-kn5pb is already assigned to node \"ha-170194\"" pod="default/busybox-fc5497c4f-kn5pb"
	I0520 13:31:23.276437       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kn5pb" node="ha-170194"
	E0520 13:31:58.307794       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5lzhk\": pod kube-proxy-5lzhk is already assigned to node \"ha-170194-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5lzhk" node="ha-170194-m04"
	E0520 13:31:58.307956       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9038c429-a368-45d7-9a3c-cdc8e614b0bb(kube-system/kube-proxy-5lzhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5lzhk"
	E0520 13:31:58.308020       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5lzhk\": pod kube-proxy-5lzhk is already assigned to node \"ha-170194-m04\"" pod="kube-system/kube-proxy-5lzhk"
	I0520 13:31:58.308071       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5lzhk" node="ha-170194-m04"
	E0520 13:31:58.318254       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vbq7d\": pod kindnet-vbq7d is already assigned to node \"ha-170194-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vbq7d" node="ha-170194-m04"
	E0520 13:31:58.318443       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8f46515c-2976-4142-8053-d41e78ea4f8b(kube-system/kindnet-vbq7d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vbq7d"
	E0520 13:31:58.318571       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vbq7d\": pod kindnet-vbq7d is already assigned to node \"ha-170194-m04\"" pod="kube-system/kindnet-vbq7d"
	I0520 13:31:58.318665       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vbq7d" node="ha-170194-m04"
	
	
	==> kubelet <==
	May 20 13:31:34 ha-170194 kubelet[1373]: E0520 13:31:34.276868    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:31:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:31:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:31:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:31:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:32:34 ha-170194 kubelet[1373]: E0520 13:32:34.276768    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:32:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:32:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:32:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:32:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:33:34 ha-170194 kubelet[1373]: E0520 13:33:34.276790    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:33:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:33:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:33:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:33:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:34:34 ha-170194 kubelet[1373]: E0520 13:34:34.277139    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:34:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:34:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:34:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:34:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:35:34 ha-170194 kubelet[1373]: E0520 13:35:34.281944    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:35:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:35:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:35:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:35:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-170194 -n ha-170194
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.22s)
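For orientation on this post-mortem: the etcd warnings at the top ("dropped internal Raft message ... remote-peer-active:false") are what the surviving leader logs while the restarted control-plane peer is unreachable, and the scheduler "already assigned to node" and kubelet ip6tables-canary entries are routine noise. A RestartSecondaryNode-style check ultimately waits for the restarted node to report Ready again; the client-go sketch below only illustrates the shape of such a wait loop. The node name, kubeconfig location, timeout, and helper name are assumptions for the sketch, not the code ha_test.go actually runs.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True
	// or the context deadline expires. Illustrative only; not minikube code.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %s did not become Ready: %w", name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		// Assumes a kubeconfig in the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, cs, "ha-170194-m02"))
	}

Polling under a context deadline keeps the check bounded, which is the same general shape as the "Waiting for machine to stop N/120" loops that appear in the next test's stop phase.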

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (384.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-170194 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-170194 -v=7 --alsologtostderr
E0520 13:36:59.760200  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:37:27.445308  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-170194 -v=7 --alsologtostderr: exit status 82 (2m1.880637816s)

                                                
                                                
-- stdout --
	* Stopping node "ha-170194-m04"  ...
	* Stopping node "ha-170194-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:35:55.735853  629988 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:35:55.736122  629988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:55.736135  629988 out.go:304] Setting ErrFile to fd 2...
	I0520 13:35:55.736142  629988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:35:55.736406  629988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:35:55.736751  629988 out.go:298] Setting JSON to false
	I0520 13:35:55.736869  629988 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:55.737289  629988 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:55.737382  629988 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:35:55.737555  629988 mustload.go:65] Loading cluster: ha-170194
	I0520 13:35:55.737678  629988 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:35:55.737702  629988 stop.go:39] StopHost: ha-170194-m04
	I0520 13:35:55.738038  629988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:55.738092  629988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:55.753005  629988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0520 13:35:55.753484  629988 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:55.754055  629988 main.go:141] libmachine: Using API Version  1
	I0520 13:35:55.754078  629988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:55.754471  629988 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:55.757976  629988 out.go:177] * Stopping node "ha-170194-m04"  ...
	I0520 13:35:55.760292  629988 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 13:35:55.760332  629988 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:35:55.760608  629988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 13:35:55.760646  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:35:55.763923  629988 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:55.764400  629988 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:31:45 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:35:55.764429  629988 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:35:55.764620  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:35:55.764775  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:35:55.764941  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:35:55.765060  629988 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:35:55.847560  629988 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 13:35:55.900117  629988 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 13:35:55.953977  629988 main.go:141] libmachine: Stopping "ha-170194-m04"...
	I0520 13:35:55.954006  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:55.955595  629988 main.go:141] libmachine: (ha-170194-m04) Calling .Stop
	I0520 13:35:55.959216  629988 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 0/120
	I0520 13:35:57.127648  629988 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:35:57.129203  629988 main.go:141] libmachine: Machine "ha-170194-m04" was stopped.
	I0520 13:35:57.129220  629988 stop.go:75] duration metric: took 1.368933535s to stop
	I0520 13:35:57.129241  629988 stop.go:39] StopHost: ha-170194-m03
	I0520 13:35:57.129636  629988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:35:57.129684  629988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:35:57.146182  629988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0520 13:35:57.146807  629988 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:35:57.147425  629988 main.go:141] libmachine: Using API Version  1
	I0520 13:35:57.147452  629988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:35:57.147855  629988 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:35:57.152184  629988 out.go:177] * Stopping node "ha-170194-m03"  ...
	I0520 13:35:57.154493  629988 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 13:35:57.154558  629988 main.go:141] libmachine: (ha-170194-m03) Calling .DriverName
	I0520 13:35:57.154910  629988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 13:35:57.154942  629988 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHHostname
	I0520 13:35:57.158553  629988 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:57.159161  629988 main.go:141] libmachine: (ha-170194-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:7b:a7", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:30:19 +0000 UTC Type:0 Mac:52:54:00:f7:7b:a7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-170194-m03 Clientid:01:52:54:00:f7:7b:a7}
	I0520 13:35:57.159210  629988 main.go:141] libmachine: (ha-170194-m03) DBG | domain ha-170194-m03 has defined IP address 192.168.39.3 and MAC address 52:54:00:f7:7b:a7 in network mk-ha-170194
	I0520 13:35:57.159468  629988 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHPort
	I0520 13:35:57.159651  629988 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHKeyPath
	I0520 13:35:57.159806  629988 main.go:141] libmachine: (ha-170194-m03) Calling .GetSSHUsername
	I0520 13:35:57.159936  629988 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m03/id_rsa Username:docker}
	I0520 13:35:57.245146  629988 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 13:35:57.297961  629988 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 13:35:57.352629  629988 main.go:141] libmachine: Stopping "ha-170194-m03"...
	I0520 13:35:57.352660  629988 main.go:141] libmachine: (ha-170194-m03) Calling .GetState
	I0520 13:35:57.354367  629988 main.go:141] libmachine: (ha-170194-m03) Calling .Stop
	I0520 13:35:57.357789  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 0/120
	I0520 13:35:58.360107  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 1/120
	I0520 13:35:59.361830  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 2/120
	I0520 13:36:00.363872  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 3/120
	I0520 13:36:01.365608  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 4/120
	I0520 13:36:02.367751  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 5/120
	I0520 13:36:03.369176  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 6/120
	I0520 13:36:04.370631  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 7/120
	I0520 13:36:05.372142  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 8/120
	I0520 13:36:06.373856  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 9/120
	I0520 13:36:07.376251  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 10/120
	I0520 13:36:08.377646  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 11/120
	I0520 13:36:09.379253  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 12/120
	I0520 13:36:10.380761  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 13/120
	I0520 13:36:11.382423  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 14/120
	I0520 13:36:12.384820  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 15/120
	I0520 13:36:13.386660  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 16/120
	I0520 13:36:14.388313  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 17/120
	I0520 13:36:15.390407  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 18/120
	I0520 13:36:16.391991  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 19/120
	I0520 13:36:17.394403  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 20/120
	I0520 13:36:18.396329  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 21/120
	I0520 13:36:19.398076  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 22/120
	I0520 13:36:20.399551  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 23/120
	I0520 13:36:21.401525  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 24/120
	I0520 13:36:22.403606  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 25/120
	I0520 13:36:23.405425  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 26/120
	I0520 13:36:24.406870  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 27/120
	I0520 13:36:25.408280  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 28/120
	I0520 13:36:26.409816  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 29/120
	I0520 13:36:27.411917  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 30/120
	I0520 13:36:28.413413  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 31/120
	I0520 13:36:29.415051  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 32/120
	I0520 13:36:30.416991  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 33/120
	I0520 13:36:31.418527  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 34/120
	I0520 13:36:32.420301  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 35/120
	I0520 13:36:33.422046  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 36/120
	I0520 13:36:34.423360  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 37/120
	I0520 13:36:35.424767  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 38/120
	I0520 13:36:36.426216  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 39/120
	I0520 13:36:37.428484  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 40/120
	I0520 13:36:38.429800  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 41/120
	I0520 13:36:39.431292  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 42/120
	I0520 13:36:40.432626  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 43/120
	I0520 13:36:41.435079  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 44/120
	I0520 13:36:42.436901  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 45/120
	I0520 13:36:43.438319  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 46/120
	I0520 13:36:44.439765  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 47/120
	I0520 13:36:45.441234  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 48/120
	I0520 13:36:46.442697  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 49/120
	I0520 13:36:47.444259  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 50/120
	I0520 13:36:48.445865  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 51/120
	I0520 13:36:49.447449  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 52/120
	I0520 13:36:50.449108  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 53/120
	I0520 13:36:51.450864  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 54/120
	I0520 13:36:52.453180  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 55/120
	I0520 13:36:53.455559  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 56/120
	I0520 13:36:54.457239  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 57/120
	I0520 13:36:55.458896  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 58/120
	I0520 13:36:56.460413  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 59/120
	I0520 13:36:57.462303  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 60/120
	I0520 13:36:58.463771  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 61/120
	I0520 13:36:59.465565  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 62/120
	I0520 13:37:00.467141  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 63/120
	I0520 13:37:01.468717  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 64/120
	I0520 13:37:02.470483  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 65/120
	I0520 13:37:03.472048  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 66/120
	I0520 13:37:04.473676  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 67/120
	I0520 13:37:05.475432  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 68/120
	I0520 13:37:06.477053  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 69/120
	I0520 13:37:07.479022  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 70/120
	I0520 13:37:08.480511  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 71/120
	I0520 13:37:09.482046  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 72/120
	I0520 13:37:10.484081  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 73/120
	I0520 13:37:11.485764  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 74/120
	I0520 13:37:12.487809  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 75/120
	I0520 13:37:13.489109  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 76/120
	I0520 13:37:14.490587  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 77/120
	I0520 13:37:15.492959  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 78/120
	I0520 13:37:16.494488  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 79/120
	I0520 13:37:17.496427  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 80/120
	I0520 13:37:18.497849  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 81/120
	I0520 13:37:19.499530  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 82/120
	I0520 13:37:20.500848  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 83/120
	I0520 13:37:21.502495  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 84/120
	I0520 13:37:22.504180  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 85/120
	I0520 13:37:23.506028  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 86/120
	I0520 13:37:24.507553  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 87/120
	I0520 13:37:25.509111  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 88/120
	I0520 13:37:26.510822  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 89/120
	I0520 13:37:27.512734  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 90/120
	I0520 13:37:28.514193  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 91/120
	I0520 13:37:29.515697  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 92/120
	I0520 13:37:30.517229  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 93/120
	I0520 13:37:31.518748  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 94/120
	I0520 13:37:32.520235  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 95/120
	I0520 13:37:33.521541  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 96/120
	I0520 13:37:34.522810  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 97/120
	I0520 13:37:35.524177  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 98/120
	I0520 13:37:36.525740  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 99/120
	I0520 13:37:37.527534  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 100/120
	I0520 13:37:38.528889  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 101/120
	I0520 13:37:39.530260  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 102/120
	I0520 13:37:40.531729  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 103/120
	I0520 13:37:41.533130  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 104/120
	I0520 13:37:42.534564  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 105/120
	I0520 13:37:43.535979  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 106/120
	I0520 13:37:44.537363  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 107/120
	I0520 13:37:45.538833  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 108/120
	I0520 13:37:46.540332  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 109/120
	I0520 13:37:47.541969  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 110/120
	I0520 13:37:48.543394  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 111/120
	I0520 13:37:49.545615  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 112/120
	I0520 13:37:50.546968  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 113/120
	I0520 13:37:51.548268  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 114/120
	I0520 13:37:52.550251  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 115/120
	I0520 13:37:53.552144  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 116/120
	I0520 13:37:54.553523  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 117/120
	I0520 13:37:55.555121  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 118/120
	I0520 13:37:56.556531  629988 main.go:141] libmachine: (ha-170194-m03) Waiting for machine to stop 119/120
	I0520 13:37:57.557573  629988 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 13:37:57.557626  629988 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 13:37:57.560518  629988 out.go:177] 
	W0520 13:37:57.562964  629988 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 13:37:57.562998  629988 out.go:239] * 
	* 
	W0520 13:37:57.565449  629988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 13:37:57.567870  629988 out.go:177] 

                                                
                                                
** /stderr **
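The repeated "Waiting for machine to stop N/120" lines above show the shape of this failure: after requesting a stop, the tool polls the VM state roughly once a second for 120 attempts, and because ha-170194-m03 still reports Running after the last attempt it exits with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below reproduces that poll-until-stopped pattern with a hypothetical Driver interface and a deliberately stuck fake VM; it illustrates the pattern only and is not minikube's libmachine code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Driver is a hypothetical VM driver interface, not minikube's libmachine API.
	type Driver interface {
		Stop() error            // request shutdown
		State() (string, error) // e.g. "Running" or "Stopped"
	}

	// stopAndWait mirrors the pattern in the log above: ask the VM to stop,
	// then poll its state up to `attempts` times, one interval apart.
	func stopAndWait(d Driver, attempts int, interval time.Duration) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if st, err := d.State(); err == nil && st == "Stopped" {
				return nil
			}
			time.Sleep(interval)
		}
		// Still Running after the last attempt -> timeout error, as in the log.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stuckVM simulates the failure mode seen here: it accepts the stop request
	// but keeps reporting Running, so stopAndWait exhausts all attempts.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		// Small values for demonstration; the run above uses 120 attempts
		// at roughly one-second intervals.
		err := stopAndWait(stuckVM{}, 5, 10*time.Millisecond)
		fmt.Println("stop result:", err)
	}

Run with a small attempt count this prints the same terminal error string as the log; at 120 attempts and one-second intervals the stop step consumes just over two minutes before failing, which matches the 2m1.8s exit time reported above.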
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-170194 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-170194 --wait=true -v=7 --alsologtostderr
E0520 13:38:01.808068  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:41:04.854474  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:41:59.760773  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-170194 --wait=true -v=7 --alsologtostderr: (4m19.599468121s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-170194
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-170194 -n ha-170194
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-170194 logs -n 25: (1.907906756s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m04 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp testdata/cp-test.txt                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m04_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03:/home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m03 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-170194 node stop m02 -v=7                                                     | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-170194 node start m02 -v=7                                                    | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-170194 -v=7                                                           | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-170194 -v=7                                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-170194 --wait=true -v=7                                                    | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:37 UTC | 20 May 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-170194                                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:42 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:37:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:37:57.616877  630458 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:37:57.617122  630458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:37:57.617131  630458 out.go:304] Setting ErrFile to fd 2...
	I0520 13:37:57.617135  630458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:37:57.617344  630458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:37:57.617880  630458 out.go:298] Setting JSON to false
	I0520 13:37:57.618851  630458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12018,"bootTime":1716200260,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:37:57.618914  630458 start.go:139] virtualization: kvm guest
	I0520 13:37:57.622275  630458 out.go:177] * [ha-170194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:37:57.624803  630458 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:37:57.624772  630458 notify.go:220] Checking for updates...
	I0520 13:37:57.627055  630458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:37:57.629285  630458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:37:57.631572  630458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:37:57.633653  630458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:37:57.635704  630458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:37:57.638347  630458 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:37:57.638485  630458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:37:57.639000  630458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:37:57.639091  630458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:37:57.654493  630458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0520 13:37:57.654930  630458 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:37:57.655509  630458 main.go:141] libmachine: Using API Version  1
	I0520 13:37:57.655537  630458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:37:57.655941  630458 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:37:57.656160  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.693674  630458 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:37:57.695820  630458 start.go:297] selected driver: kvm2
	I0520 13:37:57.695844  630458 start.go:901] validating driver "kvm2" against &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:37:57.696007  630458 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:37:57.696378  630458 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:37:57.696480  630458 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:37:57.712550  630458 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:37:57.713202  630458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:37:57.713275  630458 cni.go:84] Creating CNI manager for ""
	I0520 13:37:57.713288  630458 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 13:37:57.713339  630458 start.go:340] cluster config:
	{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:37:57.713484  630458 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:37:57.716212  630458 out.go:177] * Starting "ha-170194" primary control-plane node in "ha-170194" cluster
	I0520 13:37:57.718243  630458 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:37:57.718292  630458 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:37:57.718303  630458 cache.go:56] Caching tarball of preloaded images
	I0520 13:37:57.718408  630458 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:37:57.718421  630458 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:37:57.718537  630458 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:37:57.718772  630458 start.go:360] acquireMachinesLock for ha-170194: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:37:57.718823  630458 start.go:364] duration metric: took 30.908µs to acquireMachinesLock for "ha-170194"
	I0520 13:37:57.718842  630458 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:37:57.718856  630458 fix.go:54] fixHost starting: 
	I0520 13:37:57.719153  630458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:37:57.719194  630458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:37:57.733611  630458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0520 13:37:57.734021  630458 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:37:57.734497  630458 main.go:141] libmachine: Using API Version  1
	I0520 13:37:57.734519  630458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:37:57.734897  630458 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:37:57.735110  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.735290  630458 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:37:57.736883  630458 fix.go:112] recreateIfNeeded on ha-170194: state=Running err=<nil>
	W0520 13:37:57.736908  630458 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:37:57.740905  630458 out.go:177] * Updating the running kvm2 "ha-170194" VM ...
	I0520 13:37:57.743151  630458 machine.go:94] provisionDockerMachine start ...
	I0520 13:37:57.743184  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.743445  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.746170  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.746596  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.746622  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.746803  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.746999  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.747190  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.747336  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.747535  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.747708  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.747719  630458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:37:57.850683  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:37:57.850729  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:57.850966  630458 buildroot.go:166] provisioning hostname "ha-170194"
	I0520 13:37:57.850985  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:57.851243  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.854177  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.854630  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.854657  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.854840  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.855050  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.855206  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.855347  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.855516  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.855673  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.855686  630458 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194 && echo "ha-170194" | sudo tee /etc/hostname
	I0520 13:37:57.973785  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:37:57.973833  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.976774  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.977261  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.977290  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.977566  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.977808  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.977987  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.978158  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.978347  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.978501  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.978516  630458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:37:58.078666  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
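
The provisioning exchange above (the hostname probe, the hostname rewrite, and the /etc/hosts patch) is a series of one-shot SSH commands against 192.168.39.92:22 authenticated with the machine's id_rsa key. A minimal standalone sketch of that pattern using golang.org/x/crypto/ssh; the host, user and key path are taken from the log lines above, while the runSSH helper is our name for illustration, not minikube's ssh_runner.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens a session on an existing client and returns the combined
// output of a single command, mirroring the one-command-per-session style
// seen in the provisioning log above.
func runSSH(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, as in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.92:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	out, err := runSSH(client, "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out) // expected: ha-170194
}
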
	I0520 13:37:58.078696  630458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:37:58.078723  630458 buildroot.go:174] setting up certificates
	I0520 13:37:58.078731  630458 provision.go:84] configureAuth start
	I0520 13:37:58.078743  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:58.079084  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:37:58.082173  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.082649  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.082683  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.082825  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.084988  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.085442  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.085470  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.085590  630458 provision.go:143] copyHostCerts
	I0520 13:37:58.085618  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:37:58.085652  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:37:58.085671  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:37:58.085730  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:37:58.086111  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:37:58.086155  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:37:58.086168  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:37:58.086218  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:37:58.086299  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:37:58.086327  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:37:58.086338  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:37:58.086376  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:37:58.086452  630458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194 san=[127.0.0.1 192.168.39.92 ha-170194 localhost minikube]
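
provision.go:117 above regenerates the machine's server.pem against the shared CA with SANs [127.0.0.1 192.168.39.92 ha-170194 localhost minikube]. Purely as an illustration of that step, a self-contained crypto/x509 sketch that creates a throwaway CA and signs a server certificate carrying the same DNS and IP SANs; key sizes, serial numbers and validity periods are our placeholders, not minikube's settings.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-170194"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-170194", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
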
	I0520 13:37:58.316882  630458 provision.go:177] copyRemoteCerts
	I0520 13:37:58.316959  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:37:58.316988  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.319920  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.320406  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.320442  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.320592  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:58.320811  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.320987  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:58.321186  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:37:58.399581  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:37:58.399682  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:37:58.422656  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:37:58.422736  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 13:37:58.445502  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:37:58.445580  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:37:58.468669  630458 provision.go:87] duration metric: took 389.92062ms to configureAuth
	I0520 13:37:58.468709  630458 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:37:58.468932  630458 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:37:58.469026  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.471766  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.472102  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.472131  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.472314  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:58.472523  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.472691  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.472846  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:58.473023  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:58.473186  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:58.473201  630458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:39:29.348044  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:39:29.348083  630458 machine.go:97] duration metric: took 1m31.604906844s to provisionDockerMachine
	I0520 13:39:29.348095  630458 start.go:293] postStartSetup for "ha-170194" (driver="kvm2")
	I0520 13:39:29.348107  630458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:39:29.348125  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.348528  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:39:29.348563  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.351804  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.352328  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.352351  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.352634  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.352863  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.353084  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.353269  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:39:29.433942  630458 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:39:29.438165  630458 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:39:29.438195  630458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:39:29.438263  630458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:39:29.438375  630458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:39:29.438393  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:39:29.438506  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:39:29.448050  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:39:29.475590  630458 start.go:296] duration metric: took 127.478491ms for postStartSetup
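
The filesync scan above mirrors anything under the .minikube/files tree into the guest at the same path relative to that root (files/etc/ssl/certs/6098672.pem becomes /etc/ssl/certs/6098672.pem). A rough sketch of just that source-to-destination mapping, assuming the files root from the log; it computes the pairs and copies nothing.

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

// listSyncTargets walks filesRoot (e.g. ~/.minikube/files) and returns, for
// every regular file, the absolute guest path it would be copied to: the
// path relative to filesRoot, re-rooted at "/".
func listSyncTargets(filesRoot string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.WalkDir(filesRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(filesRoot, path)
		if err != nil {
			return err
		}
		targets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}

func main() {
	root := "/home/jenkins/minikube-integration/18929-602525/.minikube/files"
	targets, err := listSyncTargets(root)
	if err != nil {
		log.Fatal(err)
	}
	for src, dst := range targets {
		// e.g. .../files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
		fmt.Printf("%s -> %s\n", src, dst)
	}
}
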
	I0520 13:39:29.475640  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.476001  630458 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 13:39:29.476038  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.478860  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.479313  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.479343  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.479492  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.479671  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.479826  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.480013  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:39:29.558948  630458 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 13:39:29.558985  630458 fix.go:56] duration metric: took 1m31.840129922s for fixHost
	I0520 13:39:29.559041  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.561848  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.562369  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.562402  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.562525  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.562725  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.562882  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.563015  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.563186  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:39:29.563466  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:39:29.563486  630458 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:39:29.666072  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716212369.648510714
	
	I0520 13:39:29.666102  630458 fix.go:216] guest clock: 1716212369.648510714
	I0520 13:39:29.666109  630458 fix.go:229] Guest: 2024-05-20 13:39:29.648510714 +0000 UTC Remote: 2024-05-20 13:39:29.558995033 +0000 UTC m=+91.979086619 (delta=89.515681ms)
	I0520 13:39:29.666133  630458 fix.go:200] guest clock delta is within tolerance: 89.515681ms
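
The guest clock check above runs date +%s.%N in the VM, compares it with the host-side timestamp of the same moment, and proceeds only if the delta is within tolerance (89.515681ms here). A small sketch of that comparison using the values from the log; the 1-second tolerance is a placeholder, not minikube's actual threshold.

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output (e.g. "1716212369.648510714")
// into a time.Time. It assumes %N's fixed 9-digit nanosecond field.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1716212369.648510714") // value from the log above
	if err != nil {
		log.Fatal(err)
	}
	remote := time.Unix(1716212369, 558995033) // host-side timestamp of the same moment
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // placeholder threshold
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
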
	I0520 13:39:29.666138  630458 start.go:83] releasing machines lock for "ha-170194", held for 1m31.947306351s
	I0520 13:39:29.666167  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.666487  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:39:29.669259  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.669659  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.669703  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.669843  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670375  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670572  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670658  630458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:39:29.670711  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.670775  630458 ssh_runner.go:195] Run: cat /version.json
	I0520 13:39:29.670792  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.673471  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.673804  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.673830  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674007  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.674107  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674274  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.674444  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.674560  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.674584  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674610  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:39:29.674771  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.674982  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.675135  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.675297  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:39:29.790695  630458 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:39:29.790829  630458 ssh_runner.go:195] Run: systemctl --version
	I0520 13:39:29.796935  630458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:39:29.956452  630458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:39:29.962243  630458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:39:29.962307  630458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:39:29.970973  630458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
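
The two commands above first probe for a loopback CNI config and then disable any bridge/podman configs in /etc/cni/net.d by renaming them with a .mk_disabled suffix (nothing matched on this run). The same sweep written directly in Go instead of find/mv, as a sketch; the directory and suffix come from the log, the helper name is ours.

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", skipping files that are already disabled.
// It returns the files it renamed.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	if len(disabled) == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable") // matches the log above
	}
	for _, f := range disabled {
		fmt.Println("disabled", f)
	}
}
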
	I0520 13:39:29.970997  630458 start.go:494] detecting cgroup driver to use...
	I0520 13:39:29.971070  630458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:39:29.986711  630458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:39:30.000020  630458 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:39:30.000091  630458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:39:30.013978  630458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:39:30.027709  630458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:39:30.176696  630458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:39:30.316964  630458 docker.go:233] disabling docker service ...
	I0520 13:39:30.317055  630458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:39:30.332119  630458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:39:30.345096  630458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:39:30.485989  630458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:39:30.629891  630458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:39:30.643632  630458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:39:30.661385  630458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:39:30.661444  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.671977  630458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:39:30.672044  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.681566  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.691099  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.700718  630458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:39:30.710318  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.719690  630458 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.729989  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.739409  630458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:39:30.747863  630458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:39:30.756091  630458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:39:30.901860  630458 ssh_runner.go:195] Run: sudo systemctl restart crio
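
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf through a series of sed one-liners (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts crio, which is what the next lines wait on. A sketch of the two simplest substitutions done with regexp on an in-memory copy of the file, shown only to make the edits concrete; the sample input is a trimmed stand-in, not the real config.

package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// configureCrio applies the same rewrites as the sed commands in the log:
// pin the pause image and switch the cgroup manager to cgroupfs.
func configureCrio(conf, pauseImage, cgroupManager string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	// A trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.5"

[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(configureCrio(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
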
	I0520 13:39:31.802038  630458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:39:31.802131  630458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:39:31.807352  630458 start.go:562] Will wait 60s for crictl version
	I0520 13:39:31.807412  630458 ssh_runner.go:195] Run: which crictl
	I0520 13:39:31.811056  630458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:39:31.846648  630458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:39:31.846727  630458 ssh_runner.go:195] Run: crio --version
	I0520 13:39:31.878241  630458 ssh_runner.go:195] Run: crio --version
	I0520 13:39:31.909741  630458 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:39:31.911825  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:39:31.914716  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:31.915062  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:31.915093  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:31.915291  630458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:39:31.919887  630458 kubeadm.go:877] updating cluster {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:39:31.920027  630458 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:39:31.920086  630458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:39:31.964377  630458 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:39:31.964409  630458 crio.go:433] Images already preloaded, skipping extraction
	I0520 13:39:31.964475  630458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:39:31.998399  630458 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:39:31.998424  630458 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:39:31.998433  630458 kubeadm.go:928] updating node { 192.168.39.92 8443 v1.30.1 crio true true} ...
	I0520 13:39:31.998544  630458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:39:31.998610  630458 ssh_runner.go:195] Run: crio config
	I0520 13:39:32.043875  630458 cni.go:84] Creating CNI manager for ""
	I0520 13:39:32.043895  630458 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 13:39:32.043916  630458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:39:32.043963  630458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170194 NodeName:ha-170194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:39:32.044130  630458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-170194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
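
The kubeadm.yaml above is rendered from the option map logged at kubeadm.go:181; the per-node values are the advertise address, bind port, node name and node IP. A toy text/template fragment that fills in just those fields, to show the shape of that rendering step; the template text here is ours, not minikube's bootstrapper template.

package main

import (
	"log"
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type nodeParams struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	NodeIP           string
}

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	// Values for the primary control-plane node, taken from the log above.
	p := nodeParams{
		AdvertiseAddress: "192.168.39.92",
		APIServerPort:    8443,
		NodeName:         "ha-170194",
		NodeIP:           "192.168.39.92",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
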
	
	I0520 13:39:32.044148  630458 kube-vip.go:115] generating kube-vip config ...
	I0520 13:39:32.044203  630458 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:39:32.055659  630458 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:39:32.055771  630458 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 13:39:32.055832  630458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:39:32.065156  630458 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:39:32.065221  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 13:39:32.075466  630458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0520 13:39:32.091771  630458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:39:32.108200  630458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 13:39:32.123784  630458 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 13:39:32.141422  630458 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:39:32.145307  630458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:39:32.293077  630458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:39:32.308010  630458 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.92
	I0520 13:39:32.308036  630458 certs.go:194] generating shared ca certs ...
	I0520 13:39:32.308052  630458 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.308225  630458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:39:32.308281  630458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:39:32.308295  630458 certs.go:256] generating profile certs ...
	I0520 13:39:32.308389  630458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:39:32.308426  630458 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd
	I0520 13:39:32.308448  630458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.3 192.168.39.254]
	I0520 13:39:32.682527  630458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd ...
	I0520 13:39:32.682563  630458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd: {Name:mkfa69fc36ddc2d1a2a6de520d370ba30be7c53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.682830  630458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd ...
	I0520 13:39:32.682854  630458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd: {Name:mkd7bcdab272b5fb0c2e8cb0a77afbc9d037a96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.682982  630458 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:39:32.683295  630458 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
	I0520 13:39:32.683494  630458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
	I0520 13:39:32.683515  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:39:32.683533  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:39:32.683553  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:39:32.683571  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:39:32.683587  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:39:32.683602  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:39:32.683617  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:39:32.683635  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:39:32.683702  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:39:32.683748  630458 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:39:32.683761  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:39:32.683843  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:39:32.683886  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:39:32.683939  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:39:32.683998  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:39:32.684038  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.684058  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:32.684074  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:39:32.685366  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:39:32.712559  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:39:32.736326  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:39:32.760562  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:39:32.783953  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 13:39:32.806831  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 13:39:32.831120  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:39:32.856018  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:39:32.878644  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:39:32.901558  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:39:32.924050  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:39:32.946968  630458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:39:32.962701  630458 ssh_runner.go:195] Run: openssl version
	I0520 13:39:32.968510  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:39:32.978524  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.982526  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.982574  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.987882  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:39:32.996865  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:39:33.007028  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.011176  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.011249  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.016863  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:39:33.025859  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:39:33.036148  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.040434  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.040489  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.046056  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 13:39:33.055280  630458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:39:33.059705  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:39:33.065228  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:39:33.070629  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:39:33.076117  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:39:33.081392  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:39:33.086659  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
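Each of the openssl runs above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit status appears to be what decides between reusing and regenerating a cert here. The same check can be reproduced by hand against any of these certs (path shown for illustration only):

    # Exit status 0: valid for at least one more day; 1: expiring or expired
    minikube -p ha-170194 ssh "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo still-valid"
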
	I0520 13:39:33.091783  630458 kubeadm.go:391] StartCluster: {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:39:33.091891  630458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:39:33.091950  630458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:39:33.129426  630458 cri.go:89] found id: "288c25b639238858678ff15231bd6cb6719c8101c561b37043af0511d4979d50"
	I0520 13:39:33.129454  630458 cri.go:89] found id: "6e288ee54e1532f81c3d2587067144f5d8d0382f8be5e992c6b6bd7c9cc1de98"
	I0520 13:39:33.129457  630458 cri.go:89] found id: "aec5b752545e8d9abd4d44817bed499e6bef842a475cd12e2a3dee7cadd5e0dc"
	I0520 13:39:33.129460  630458 cri.go:89] found id: "20ef4886be391f5f00d7681fc4012bf67995bc8ecf4e1fae3a30b9cf6ad18f37"
	I0520 13:39:33.129462  630458 cri.go:89] found id: "9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59"
	I0520 13:39:33.129466  630458 cri.go:89] found id: "d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4"
	I0520 13:39:33.129468  630458 cri.go:89] found id: "6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583"
	I0520 13:39:33.129470  630458 cri.go:89] found id: "ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2"
	I0520 13:39:33.129473  630458 cri.go:89] found id: "2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b"
	I0520 13:39:33.129480  630458 cri.go:89] found id: "334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5"
	I0520 13:39:33.129482  630458 cri.go:89] found id: "e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23"
	I0520 13:39:33.129485  630458 cri.go:89] found id: "bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2"
	I0520 13:39:33.129487  630458 cri.go:89] found id: "d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8"
	I0520 13:39:33.129490  630458 cri.go:89] found id: "b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa"
	I0520 13:39:33.129494  630458 cri.go:89] found id: ""
	I0520 13:39:33.129545  630458 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.938209197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212537938172896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0af130e3-6ed6-46fc-8219-ff018fbde9bd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.938839694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e06534f-8bdc-4352-8caa-041e694792fc name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.938955900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e06534f-8bdc-4352-8caa-041e694792fc name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.939492063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e06534f-8bdc-4352-8caa-041e694792fc name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.998226336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c3953d8-3381-4f25-ad7a-5fedd8e370b5 name=/runtime.v1.RuntimeService/Version
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.998304503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c3953d8-3381-4f25-ad7a-5fedd8e370b5 name=/runtime.v1.RuntimeService/Version
	May 20 13:42:17 ha-170194 crio[3762]: time="2024-05-20 13:42:17.999658551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b86ffaf3-c81c-4fc5-9b69-3f3fb7ec2e10 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.000259959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212538000185046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b86ffaf3-c81c-4fc5-9b69-3f3fb7ec2e10 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.000730127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33d5b1e4-6750-4dcf-9f19-b162205e29f6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.000798938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33d5b1e4-6750-4dcf-9f19-b162205e29f6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.001365642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33d5b1e4-6750-4dcf-9f19-b162205e29f6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.050876053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e02dbe76-d1ba-427a-8863-6eb21c9b090a name=/runtime.v1.RuntimeService/Version
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.051060792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e02dbe76-d1ba-427a-8863-6eb21c9b090a name=/runtime.v1.RuntimeService/Version
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.052490688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20dd625f-70e6-40be-b05b-54cb63cdc63f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.052958122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212538052883659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20dd625f-70e6-40be-b05b-54cb63cdc63f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.053727382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3d1131a-c722-46a1-81d5-2b20b565ee90 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.053787892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3d1131a-c722-46a1-81d5-2b20b565ee90 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.054334538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3d1131a-c722-46a1-81d5-2b20b565ee90 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.097107286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b012450-f830-499a-a029-59beab4fc22f name=/runtime.v1.RuntimeService/Version
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.097189403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b012450-f830-499a-a029-59beab4fc22f name=/runtime.v1.RuntimeService/Version
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.098602928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4b289a3-7fbc-4bef-8a93-a32e85b269b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.099136464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212538099108965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4b289a3-7fbc-4bef-8a93-a32e85b269b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.099658265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d265e2aa-2b84-45c9-bd8e-7432dbdf2c58 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.099725339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d265e2aa-2b84-45c9-bd8e-7432dbdf2c58 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:42:18 ha-170194 crio[3762]: time="2024-05-20 13:42:18.100188339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d265e2aa-2b84-45c9-bd8e-7432dbdf2c58 name=/runtime.v1.RuntimeService/ListContainers
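	
	The repeated Request/Response pairs above (Version, ImageFsInfo, ListContainers) are the kubelet's periodic CRI polls of cri-o, logged at debug level by the otel-collector interceptor. Below is a minimal sketch of issuing the same /runtime.v1.RuntimeService/Version and /runtime.v1.RuntimeService/ListContainers calls directly with the Go CRI client; the socket path and the unauthenticated local connection are assumptions based on cri-o's defaults, not something recorded in this report.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// cri-o's default CRI endpoint (assumption: the node exposes it at this path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
	
		// Same call as /runtime.v1.RuntimeService/Version in the log above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)
	
		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" debug line.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
	
	On the node, a crictl ps -a style listing gives the same data in summary form, comparable to the container status table in the next section.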
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	45c16067478b0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   42415cf90d345       kindnet-cmd8x
	7a34f8e509820       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       3                   6617e85a79be1       storage-provisioner
	f7785a144a0e7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   4b2e01ae01a57       kube-apiserver-ha-170194
	87196f06f4196       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   0376397fdcb9a       busybox-fc5497c4f-kn5pb
	b195802788ab5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Running             kube-controller-manager   2                   6118c63443924       kube-controller-manager-ha-170194
	87fa4ca39ac28       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   d1314f832bf27       kube-vip-ha-170194
	3df09e40b72ac       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   c0b19b98fe1f2       kube-proxy-qth8f
	3ad4de192a107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       2                   6617e85a79be1       storage-provisioner
	6720f2ab5ded7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   42415cf90d345       kindnet-cmd8x
	5c64aa00ef2d6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   3bb9df5ac1214       kube-scheduler-ha-170194
	749db6e85ef41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   49b4ed5fa5d93       coredns-7db6d8ff4d-s28r6
	f2418473c6764       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e74fd0af7a898       coredns-7db6d8ff4d-vk78q
	ca3e13e017f2f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   4b2e01ae01a57       kube-apiserver-ha-170194
	718c55ec406ad       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   254b523bd0712       etcd-ha-170194
	49de9de51e3a4       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   6118c63443924       kube-controller-manager-ha-170194
	cf740d9b5f06d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   85c1015ea36da       busybox-fc5497c4f-kn5pb
	d3c1362d9012c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   cb6f21c242e20       coredns-7db6d8ff4d-vk78q
	6bd28e2e55305       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   901f35680bee5       coredns-7db6d8ff4d-s28r6
	2ca782f6be5aa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   ef9cc40406ad7       kube-proxy-qth8f
	bd7f5eac64d8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   0a5e941c6740d       etcd-ha-170194
	d125c402bd4cb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   1a02a71cebea3       kube-scheduler-ha-170194
	
	
	==> coredns [6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583] <==
	[INFO] 10.244.0.4:34499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153438s
	[INFO] 10.244.0.4:47635 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003467859s
	[INFO] 10.244.0.4:37386 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211396s
	[INFO] 10.244.0.4:37274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116452s
	[INFO] 10.244.1.2:33488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156093s
	[INFO] 10.244.1.2:44452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130005s
	[INFO] 10.244.2.2:54953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216728s
	[INFO] 10.244.2.2:41118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098892s
	[INFO] 10.244.0.4:52970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086695s
	[INFO] 10.244.0.4:33272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104087s
	[INFO] 10.244.0.4:47074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061643s
	[INFO] 10.244.1.2:46181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125314s
	[INFO] 10.244.1.2:60651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114425s
	[INFO] 10.244.2.2:39831 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092598s
	[INFO] 10.244.2.2:36745 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009346s
	[INFO] 10.244.0.4:58943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126961s
	[INFO] 10.244.0.4:51569 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093816s
	[INFO] 10.244.0.4:33771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095037s
	[INFO] 10.244.1.2:51959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152608s
	[INFO] 10.244.2.2:41273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085919s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38032->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38032->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4] <==
	[INFO] 10.244.1.2:39465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205731s
	[INFO] 10.244.1.2:48674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104027s
	[INFO] 10.244.1.2:42811 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001662979s
	[INFO] 10.244.1.2:55637 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155358s
	[INFO] 10.244.1.2:34282 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105391s
	[INFO] 10.244.2.2:55675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129728s
	[INFO] 10.244.2.2:33579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845622s
	[INFO] 10.244.2.2:38991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087704s
	[INFO] 10.244.2.2:60832 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001368991s
	[INFO] 10.244.2.2:49213 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064756s
	[INFO] 10.244.2.2:54664 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073817s
	[INFO] 10.244.0.4:58834 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096728s
	[INFO] 10.244.1.2:58412 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081949s
	[INFO] 10.244.1.2:52492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085342s
	[INFO] 10.244.2.2:34598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011731s
	[INFO] 10.244.2.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131389s
	[INFO] 10.244.0.4:33373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185564s
	[INFO] 10.244.1.2:38899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131605s
	[INFO] 10.244.1.2:39420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251117s
	[INFO] 10.244.1.2:39569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142225s
	[INFO] 10.244.2.2:33399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185075s
	[INFO] 10.244.2.2:48490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100278s
	[INFO] 10.244.2.2:35988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115036s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57904->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57906->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57906->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-170194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:28:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-170194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c0123e982bf4840b6eb6a3f175c7438
	  System UUID:                4c0123e9-82bf-4840-b6eb-6a3f175c7438
	  Boot ID:                    37123cd6-de29-4d66-9faf-c58bcb2e7628
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                               ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-kn5pb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system  coredns-7db6d8ff4d-s28r6           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  coredns-7db6d8ff4d-vk78q           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  etcd-ha-170194                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system  kindnet-cmd8x                      100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system  kube-apiserver-ha-170194           250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-controller-manager-ha-170194  200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-proxy-qth8f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-scheduler-ha-170194           100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-vip-ha-170194                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 115s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-170194 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-170194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-170194 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-170194 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Warning  ContainerGCFailed        3m44s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           111s   node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           107s   node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           27s    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	
	
	Name:               ha-170194-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:41:02 +0000   Mon, 20 May 2024 13:40:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:41:02 +0000   Mon, 20 May 2024 13:40:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:41:02 +0000   Mon, 20 May 2024 13:40:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:41:02 +0000   Mon, 20 May 2024 13:40:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    ha-170194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcdee518e92c4c0ba5f3ba763f746ea2
	  System UUID:                dcdee518-e92c-4c0b-a5f3-ba763f746ea2
	  Boot ID:                    f9827b30-252d-42f7-b6ca-2b6b5d85ff27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-tmq2s                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system  etcd-ha-170194-m02                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system  kindnet-5mg44                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system  kube-apiserver-ha-170194-m02           250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-controller-manager-ha-170194-m02  200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-proxy-7ncvb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-scheduler-ha-170194-m02           100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-vip-ha-170194-m02                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 100s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-170194-m02 status is now: NodeNotReady
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                   node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	
	
	Name:               ha-170194-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:41:53 +0000   Mon, 20 May 2024 13:41:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:41:53 +0000   Mon, 20 May 2024 13:41:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:41:53 +0000   Mon, 20 May 2024 13:41:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:41:53 +0000   Mon, 20 May 2024 13:41:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-170194-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64924ff33ca44b9f8535eb50161a056c
	  System UUID:                64924ff3-3ca4-4b9f-8535-eb50161a056c
	  Boot ID:                    11751cc8-87fc-4d90-b8e4-fbebe91c028e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-vr9tf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system  etcd-ha-170194-m03                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system  kindnet-q72lt                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system  kube-apiserver-ha-170194-m03           250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-controller-manager-ha-170194-m03  200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-proxy-x79p4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-scheduler-ha-170194-m03           100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-vip-ha-170194-m03                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-170194-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal   RegisteredNode           111s               node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	  Normal   NodeNotReady             71s                node-controller  Node ha-170194-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-170194-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-170194-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 55s                kubelet          Node ha-170194-m03 has been rebooted, boot id: 11751cc8-87fc-4d90-b8e4-fbebe91c028e
	  Normal   NodeReady                55s                kubelet          Node ha-170194-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-170194-m03 event: Registered Node ha-170194-m03 in Controller
	
	
	Name:               ha-170194-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:42:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:42:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:42:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:42:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-170194-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04786f3c085342e689c4ca279f442854
	  System UUID:                04786f3c-0853-42e6-89c4-ca279f442854
	  Boot ID:                    57cf420c-75d0-4b86-a49d-04839c715bec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----              ------------  ----------  ---------------  -------------  ---
	  kube-system  kindnet-98pk9     100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system  kube-proxy-52pf8  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-170194-m04 status is now: NodeReady
	  Normal   RegisteredNode           111s               node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeNotReady             71s                node-controller  Node ha-170194-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-170194-m04 has been rebooted, boot id: 57cf420c-75d0-4b86-a49d-04839c715bec
	  Normal   NodeReady                9s                 kubelet          Node ha-170194-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.658967] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056574] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.149929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138520] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.255022] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.918021] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.231733] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.055898] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.968265] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.072694] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.206801] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:29] kauditd_printk_skb: 74 callbacks suppressed
	[May20 13:39] systemd-fstab-generator[3681]: Ignoring "noauto" option for root device
	[  +0.147840] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.169526] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.139500] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.264446] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[  +1.391053] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +4.644038] kauditd_printk_skb: 126 callbacks suppressed
	[ +16.302134] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.618456] kauditd_printk_skb: 1 callbacks suppressed
	[May20 13:40] kauditd_printk_skb: 6 callbacks suppressed
	[ +32.200348] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028] <==
	{"level":"warn","ts":"2024-05-20T13:41:17.928724Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:17.928819Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:19.130764Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.3:2380/version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:19.131016Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:22.928974Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:22.929037Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:23.132619Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.3:2380/version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:23.132796Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:27.134838Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.3:2380/version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:27.135027Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:27.929444Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:27.929468Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:31.137013Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.3:2380/version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:31.137196Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"967c73ca63f4755d","error":"Get \"https://192.168.39.3:2380/version\": dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:32.930194Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T13:41:32.93029Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"967c73ca63f4755d","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-20T13:41:35.133593Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.144372Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.144701Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.164807Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d468df581a6d993d","to":"967c73ca63f4755d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T13:41:35.164877Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.194495Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d468df581a6d993d","to":"967c73ca63f4755d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T13:41:35.194578Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:41:45.961984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.14535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-20T13:41:45.962246Z","caller":"traceutil/trace.go:171","msg":"trace[2131067986] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:2423; }","duration":"101.546067ms","start":"2024-05-20T13:41:45.860655Z","end":"2024-05-20T13:41:45.962202Z","steps":["trace[2131067986] 'count revisions from in-memory index tree'  (duration: 99.852001ms)"],"step_count":1}
	
	
	==> etcd [bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2] <==
	{"level":"info","ts":"2024-05-20T13:37:58.637559Z","caller":"traceutil/trace.go:171","msg":"trace[529815512] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"808.865735ms","start":"2024-05-20T13:37:57.828687Z","end":"2024-05-20T13:37:58.637553Z","steps":["trace[529815512] 'agreement among raft nodes before linearized reading'  (duration: 783.1836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:37:58.637602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T13:37:57.828675Z","time spent":"808.919041ms","remote":"127.0.0.1:54396","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	2024/05/20 13:37:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T13:37:58.611888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"676.653642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T13:37:58.637827Z","caller":"traceutil/trace.go:171","msg":"trace[955076974] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"702.604633ms","start":"2024-05-20T13:37:57.935215Z","end":"2024-05-20T13:37:58.637819Z","steps":["trace[955076974] 'agreement among raft nodes before linearized reading'  (duration: 676.669951ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:37:58.637867Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T13:37:57.935185Z","time spent":"702.672493ms","remote":"127.0.0.1:54346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	2024/05/20 13:37:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-20T13:37:58.684851Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d468df581a6d993d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T13:37:58.685249Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685308Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685363Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.6855Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.68556Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.68562Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685652Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685677Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685706Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.686041Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.686073Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.689208Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-05-20T13:37:58.689333Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-05-20T13:37:58.689374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-170194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"]}
	
	
	==> kernel <==
	 13:42:18 up 14 min,  0 users,  load average: 0.42, 0.50, 0.36
	Linux ha-170194 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931] <==
	I0520 13:41:45.202240       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:41:55.214532       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:41:55.214669       1 main.go:227] handling current node
	I0520 13:41:55.214700       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:41:55.214720       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:41:55.214875       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:41:55.214898       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:41:55.215087       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:41:55.215119       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:42:05.223948       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:42:05.223998       1 main.go:227] handling current node
	I0520 13:42:05.224024       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:42:05.224031       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:42:05.224191       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:42:05.224221       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:42:05.224310       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:42:05.224336       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:42:15.243053       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:42:15.243148       1 main.go:227] handling current node
	I0520 13:42:15.243164       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:42:15.243170       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:42:15.244003       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0520 13:42:15.244060       1 main.go:250] Node ha-170194-m03 has CIDR [10.244.2.0/24] 
	I0520 13:42:15.244257       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:42:15.244287       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6720f2ab5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99] <==
	I0520 13:39:37.900540       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 13:39:48.131268       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0520 13:39:49.536431       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 13:39:52.608562       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 13:39:59.328143       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.202:60718->10.96.0.1:443: read: connection reset by peer
	I0520 13:40:02.330359       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8] <==
	I0520 13:39:37.585083       1 options.go:221] external host was not specified, using 192.168.39.92
	I0520 13:39:37.593517       1 server.go:148] Version: v1.30.1
	I0520 13:39:37.593619       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:39:38.306349       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 13:39:38.317991       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:39:38.318371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 13:39:38.318400       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 13:39:38.318569       1 instance.go:299] Using reconciler: lease
	W0520 13:39:58.307221       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 13:39:58.307336       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 13:39:58.319080       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d] <==
	I0520 13:40:19.207248       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 13:40:19.207415       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 13:40:19.287353       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:40:19.287381       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:40:19.288723       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 13:40:19.289364       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:40:19.289754       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:40:19.290210       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:40:19.290248       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:40:19.290254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:40:19.290259       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:40:19.290416       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:40:19.290445       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:40:19.298205       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:40:19.298797       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:40:19.316803       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:40:19.316858       1 policy_source.go:224] refreshing policies
	W0520 13:40:19.326177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.3]
	I0520 13:40:19.327817       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 13:40:19.336338       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 13:40:19.340034       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 13:40:19.383713       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:40:20.199323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 13:40:20.556233       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.3 192.168.39.92]
	W0520 13:40:30.565582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.92]
	
	
	==> kube-controller-manager [49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4] <==
	I0520 13:39:38.506182       1 serving.go:380] Generated self-signed cert in-memory
	I0520 13:39:38.854415       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 13:39:38.854475       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:39:38.856250       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 13:39:38.856399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 13:39:38.856488       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 13:39:38.856724       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0520 13:39:59.326758       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.92:8443/healthz\": dial tcp 192.168.39.92:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42] <==
	I0520 13:40:31.715600       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-170194-m04"
	I0520 13:40:31.715837       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 13:40:31.822106       1 shared_informer.go:320] Caches are synced for cronjob
	I0520 13:40:31.842705       1 shared_informer.go:320] Caches are synced for disruption
	I0520 13:40:31.857567       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 13:40:31.871471       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:40:31.898357       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:40:32.296244       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:40:32.296280       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 13:40:32.303326       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:40:35.417964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.267µs"
	I0520 13:40:40.887696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.614812ms"
	I0520 13:40:40.887804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.037µs"
	I0520 13:40:49.526867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.628313ms"
	E0520 13:40:49.527041       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0520 13:40:49.527346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.682µs"
	I0520 13:40:49.529531       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vpvxn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vpvxn\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 13:40:49.530015       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bce3432-4ad5-4c9f-b97f-2c4f42697a37", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vpvxn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vpvxn": the object has been modified; please apply your changes to the latest version and try again
	I0520 13:40:49.532729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.852µs"
	I0520 13:41:07.752624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.191218ms"
	I0520 13:41:07.754371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.796µs"
	I0520 13:41:24.214669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.513µs"
	I0520 13:41:42.522290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.82893ms"
	I0520 13:41:42.522565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.17µs"
	I0520 13:42:09.838808       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	
	
	==> kube-proxy [2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b] <==
	E0520 13:36:41.504390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:41.504523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:41.504631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.736639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736644       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.738077       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.738155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:57.953097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:57.953309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:57.953481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:57.953570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:01.025856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:01.026070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:16.385872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:16.385990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:16.386108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:16.386140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:28.673455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:28.674191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:50.176518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:50.177191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:53.249191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:53.249391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97] <==
	I0520 13:39:38.736717       1 server_linux.go:69] "Using iptables proxy"
	E0520 13:39:40.769149       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:43.840607       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:46.912709       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:53.068345       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:40:05.345686       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 13:40:23.435261       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0520 13:40:23.480236       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:40:23.480350       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:40:23.480381       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:40:23.482810       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:40:23.483197       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:40:23.483473       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:40:23.485294       1 config.go:192] "Starting service config controller"
	I0520 13:40:23.485371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:40:23.485414       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:40:23.485430       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:40:23.486336       1 config.go:319] "Starting node config controller"
	I0520 13:40:23.487398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:40:23.586184       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:40:23.586350       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:40:23.587693       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc011a7aeccb9edb1566b91b] <==
	W0520 13:40:14.347147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.92:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:14.347219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.92:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:14.408327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:14.408447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:15.009193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.92:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:15.009268       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.92:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.183381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.92:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.183500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.92:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.869485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.92:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.869559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.92:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.959516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.92:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.959583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.92:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:17.039559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.92:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:17.039625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.92:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:17.116491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:17.116560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:19.215485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:40:19.215692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:40:19.216670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:40:19.224986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:40:19.216831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:40:19.220874       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:40:19.225319       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:40:19.225301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0520 13:40:32.933537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8] <==
	W0520 13:37:51.909356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:37:51.909543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:37:52.186504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:37:52.186546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:37:52.657822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:52.658002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:52.791294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:37:52.791360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:37:52.955870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:37:52.956788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:37:53.098534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:53.098585       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:53.231417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:37:53.231471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:37:53.518500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:37:53.518658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:37:54.185987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:37:54.186119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:37:54.335463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:54.335511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:54.862886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:37:54.862951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:37:58.587647       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0520 13:37:58.587975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0520 13:37:58.588083       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 13:40:14 ha-170194 kubelet[1373]: I0520 13:40:14.560270    1373 status_manager.go:853] "Failed to get status for pod" podUID="44545bda-e29b-44d6-97f7-45290fda6e37" pod="kube-system/kindnet-cmd8x" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-cmd8x\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 13:40:14 ha-170194 kubelet[1373]: E0520 13:40:14.561036    1373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-170194?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 20 13:40:14 ha-170194 kubelet[1373]: E0520 13:40:14.561169    1373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-170194.17d135de8044506f  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-170194,UID:61864ede3229362b45cbdcfb69b0adfa,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-170194,},FirstTimestamp:2024-05-20 13:36:02.755842159 +0000 UTC m=+448.652597477,LastTimestamp:2024-05-20 13:36:02.755842159 +0000 UTC m=+448.652597477,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170194,}"
	May 20 13:40:17 ha-170194 kubelet[1373]: I0520 13:40:17.251998    1373 scope.go:117] "RemoveContainer" containerID="ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8"
	May 20 13:40:17 ha-170194 kubelet[1373]: I0520 13:40:17.632475    1373 status_manager.go:853] "Failed to get status for pod" podUID="334eb0ed-c771-4840-92d1-04c1b9ec5179" pod="kube-system/coredns-7db6d8ff4d-vk78q" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vk78q\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 13:40:20 ha-170194 kubelet[1373]: I0520 13:40:20.252988    1373 scope.go:117] "RemoveContainer" containerID="6720f2ab5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99"
	May 20 13:40:20 ha-170194 kubelet[1373]: E0520 13:40:20.255749    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-cmd8x_kube-system(44545bda-e29b-44d6-97f7-45290fda6e37)\"" pod="kube-system/kindnet-cmd8x" podUID="44545bda-e29b-44d6-97f7-45290fda6e37"
	May 20 13:40:20 ha-170194 kubelet[1373]: E0520 13:40:20.704332    1373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-170194\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 13:40:20 ha-170194 kubelet[1373]: I0520 13:40:20.705273    1373 status_manager.go:853] "Failed to get status for pod" podUID="ce0e094e-1f65-407a-9fc0-a3c55f9de344" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 13:40:21 ha-170194 kubelet[1373]: I0520 13:40:21.252798    1373 scope.go:117] "RemoveContainer" containerID="3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86"
	May 20 13:40:34 ha-170194 kubelet[1373]: I0520 13:40:34.262286    1373 scope.go:117] "RemoveContainer" containerID="6720f2ab5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99"
	May 20 13:40:34 ha-170194 kubelet[1373]: E0520 13:40:34.301063    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:40:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:40:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:40:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:40:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:40:34 ha-170194 kubelet[1373]: I0520 13:40:34.390780    1373 scope.go:117] "RemoveContainer" containerID="aec5b752545e8d9abd4d44817bed499e6bef842a475cd12e2a3dee7cadd5e0dc"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.252551    1373 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-170194" podUID="aed1bd37-f323-4950-b9d0-43e5e2eef5b7"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.290516    1373 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-170194"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.832309    1373 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-170194" podUID="aed1bd37-f323-4950-b9d0-43e5e2eef5b7"
	May 20 13:41:34 ha-170194 kubelet[1373]: E0520 13:41:34.278317    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:41:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:41:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:41:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:41:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:42:17.586659  631874 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18929-602525/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-170194 -n ha-170194
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (384.19s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 stop -v=7 --alsologtostderr
E0520 13:43:01.807526  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 stop -v=7 --alsologtostderr: exit status 82 (2m0.490591572s)

                                                
                                                
-- stdout --
	* Stopping node "ha-170194-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:42:37.198457  632313 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:42:37.198576  632313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:42:37.198585  632313 out.go:304] Setting ErrFile to fd 2...
	I0520 13:42:37.198589  632313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:42:37.198755  632313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:42:37.198956  632313 out.go:298] Setting JSON to false
	I0520 13:42:37.199026  632313 mustload.go:65] Loading cluster: ha-170194
	I0520 13:42:37.199357  632313 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:42:37.199442  632313 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:42:37.199642  632313 mustload.go:65] Loading cluster: ha-170194
	I0520 13:42:37.199768  632313 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:42:37.199795  632313 stop.go:39] StopHost: ha-170194-m04
	I0520 13:42:37.200134  632313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:42:37.200188  632313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:42:37.216940  632313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0520 13:42:37.217457  632313 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:42:37.218040  632313 main.go:141] libmachine: Using API Version  1
	I0520 13:42:37.218063  632313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:42:37.218490  632313 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:42:37.223647  632313 out.go:177] * Stopping node "ha-170194-m04"  ...
	I0520 13:42:37.226162  632313 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 13:42:37.226207  632313 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:42:37.226493  632313 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 13:42:37.226526  632313 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:42:37.229911  632313 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:42:37.230444  632313 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:42:04 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:42:37.230483  632313 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:42:37.230748  632313 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:42:37.230948  632313 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:42:37.231144  632313 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:42:37.231329  632313 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	I0520 13:42:37.316840  632313 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 13:42:37.370367  632313 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 13:42:37.423558  632313 main.go:141] libmachine: Stopping "ha-170194-m04"...
	I0520 13:42:37.423585  632313 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:42:37.425226  632313 main.go:141] libmachine: (ha-170194-m04) Calling .Stop
	I0520 13:42:37.428885  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 0/120
	I0520 13:42:38.430362  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 1/120
	I0520 13:42:39.432007  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 2/120
	I0520 13:42:40.433847  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 3/120
	I0520 13:42:41.435832  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 4/120
	I0520 13:42:42.437887  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 5/120
	I0520 13:42:43.439267  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 6/120
	I0520 13:42:44.440739  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 7/120
	I0520 13:42:45.442272  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 8/120
	I0520 13:42:46.444205  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 9/120
	I0520 13:42:47.446739  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 10/120
	I0520 13:42:48.448300  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 11/120
	I0520 13:42:49.450369  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 12/120
	I0520 13:42:50.452106  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 13/120
	I0520 13:42:51.453561  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 14/120
	I0520 13:42:52.455408  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 15/120
	I0520 13:42:53.456632  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 16/120
	I0520 13:42:54.458578  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 17/120
	I0520 13:42:55.459961  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 18/120
	I0520 13:42:56.461448  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 19/120
	I0520 13:42:57.463779  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 20/120
	I0520 13:42:58.465348  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 21/120
	I0520 13:42:59.466643  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 22/120
	I0520 13:43:00.468242  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 23/120
	I0520 13:43:01.469764  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 24/120
	I0520 13:43:02.471322  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 25/120
	I0520 13:43:03.472822  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 26/120
	I0520 13:43:04.474432  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 27/120
	I0520 13:43:05.475890  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 28/120
	I0520 13:43:06.477483  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 29/120
	I0520 13:43:07.479429  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 30/120
	I0520 13:43:08.481133  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 31/120
	I0520 13:43:09.482603  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 32/120
	I0520 13:43:10.484368  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 33/120
	I0520 13:43:11.486696  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 34/120
	I0520 13:43:12.488652  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 35/120
	I0520 13:43:13.490425  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 36/120
	I0520 13:43:14.491919  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 37/120
	I0520 13:43:15.494064  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 38/120
	I0520 13:43:16.495487  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 39/120
	I0520 13:43:17.497942  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 40/120
	I0520 13:43:18.499548  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 41/120
	I0520 13:43:19.501025  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 42/120
	I0520 13:43:20.502418  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 43/120
	I0520 13:43:21.503837  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 44/120
	I0520 13:43:22.505899  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 45/120
	I0520 13:43:23.507834  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 46/120
	I0520 13:43:24.509640  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 47/120
	I0520 13:43:25.511858  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 48/120
	I0520 13:43:26.512987  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 49/120
	I0520 13:43:27.515350  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 50/120
	I0520 13:43:28.517298  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 51/120
	I0520 13:43:29.518460  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 52/120
	I0520 13:43:30.520293  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 53/120
	I0520 13:43:31.521566  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 54/120
	I0520 13:43:32.523422  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 55/120
	I0520 13:43:33.524856  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 56/120
	I0520 13:43:34.526682  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 57/120
	I0520 13:43:35.528204  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 58/120
	I0520 13:43:36.529542  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 59/120
	I0520 13:43:37.531521  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 60/120
	I0520 13:43:38.532961  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 61/120
	I0520 13:43:39.534329  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 62/120
	I0520 13:43:40.536000  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 63/120
	I0520 13:43:41.537318  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 64/120
	I0520 13:43:42.539284  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 65/120
	I0520 13:43:43.540721  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 66/120
	I0520 13:43:44.542105  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 67/120
	I0520 13:43:45.543629  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 68/120
	I0520 13:43:46.545052  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 69/120
	I0520 13:43:47.547284  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 70/120
	I0520 13:43:48.548688  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 71/120
	I0520 13:43:49.550058  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 72/120
	I0520 13:43:50.551804  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 73/120
	I0520 13:43:51.553055  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 74/120
	I0520 13:43:52.554714  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 75/120
	I0520 13:43:53.556534  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 76/120
	I0520 13:43:54.558023  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 77/120
	I0520 13:43:55.559769  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 78/120
	I0520 13:43:56.561278  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 79/120
	I0520 13:43:57.563583  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 80/120
	I0520 13:43:58.565196  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 81/120
	I0520 13:43:59.566813  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 82/120
	I0520 13:44:00.568530  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 83/120
	I0520 13:44:01.570535  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 84/120
	I0520 13:44:02.572808  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 85/120
	I0520 13:44:03.574265  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 86/120
	I0520 13:44:04.575901  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 87/120
	I0520 13:44:05.577470  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 88/120
	I0520 13:44:06.579606  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 89/120
	I0520 13:44:07.581609  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 90/120
	I0520 13:44:08.583924  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 91/120
	I0520 13:44:09.586536  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 92/120
	I0520 13:44:10.588056  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 93/120
	I0520 13:44:11.590323  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 94/120
	I0520 13:44:12.592302  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 95/120
	I0520 13:44:13.593911  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 96/120
	I0520 13:44:14.595331  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 97/120
	I0520 13:44:15.596833  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 98/120
	I0520 13:44:16.599164  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 99/120
	I0520 13:44:17.601286  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 100/120
	I0520 13:44:18.603247  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 101/120
	I0520 13:44:19.605065  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 102/120
	I0520 13:44:20.606553  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 103/120
	I0520 13:44:21.607997  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 104/120
	I0520 13:44:22.609863  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 105/120
	I0520 13:44:23.611438  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 106/120
	I0520 13:44:24.613225  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 107/120
	I0520 13:44:25.614818  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 108/120
	I0520 13:44:26.616299  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 109/120
	I0520 13:44:27.617670  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 110/120
	I0520 13:44:28.620168  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 111/120
	I0520 13:44:29.621861  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 112/120
	I0520 13:44:30.623279  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 113/120
	I0520 13:44:31.624826  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 114/120
	I0520 13:44:32.626976  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 115/120
	I0520 13:44:33.628693  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 116/120
	I0520 13:44:34.630175  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 117/120
	I0520 13:44:35.631587  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 118/120
	I0520 13:44:36.633946  632313 main.go:141] libmachine: (ha-170194-m04) Waiting for machine to stop 119/120
	I0520 13:44:37.634580  632313 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 13:44:37.634639  632313 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 13:44:37.637365  632313 out.go:177] 
	W0520 13:44:37.639397  632313 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 13:44:37.639417  632313 out.go:239] * 
	* 
	W0520 13:44:37.642188  632313 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 13:44:37.644727  632313 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-170194 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr: exit status 3 (19.103201396s)

                                                
                                                
-- stdout --
	ha-170194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170194-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:44:37.693887  632733 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:44:37.694181  632733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:44:37.694193  632733 out.go:304] Setting ErrFile to fd 2...
	I0520 13:44:37.694198  632733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:44:37.694410  632733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:44:37.694607  632733 out.go:298] Setting JSON to false
	I0520 13:44:37.694635  632733 mustload.go:65] Loading cluster: ha-170194
	I0520 13:44:37.694757  632733 notify.go:220] Checking for updates...
	I0520 13:44:37.695062  632733 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:44:37.695084  632733 status.go:255] checking status of ha-170194 ...
	I0520 13:44:37.695514  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.695644  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.713085  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I0520 13:44:37.713590  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.714267  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.714292  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.714618  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.714820  632733 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:44:37.716246  632733 status.go:330] ha-170194 host status = "Running" (err=<nil>)
	I0520 13:44:37.716265  632733 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:44:37.716685  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.716738  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.732306  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0520 13:44:37.732849  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.733431  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.733453  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.733860  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.734056  632733 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:44:37.737487  632733 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:44:37.737992  632733 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:44:37.738028  632733 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:44:37.738160  632733 host.go:66] Checking if "ha-170194" exists ...
	I0520 13:44:37.738562  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.738608  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.754116  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0520 13:44:37.754560  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.755068  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.755090  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.755438  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.755786  632733 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:44:37.756063  632733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:44:37.756090  632733 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:44:37.759190  632733 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:44:37.759564  632733 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:44:37.759597  632733 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:44:37.759794  632733 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:44:37.759996  632733 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:44:37.760186  632733 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:44:37.760370  632733 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:44:37.850281  632733 ssh_runner.go:195] Run: systemctl --version
	I0520 13:44:37.857669  632733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:44:37.875891  632733 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:44:37.875928  632733 api_server.go:166] Checking apiserver status ...
	I0520 13:44:37.875961  632733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:44:37.893795  632733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5016/cgroup
	W0520 13:44:37.902797  632733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5016/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:44:37.902863  632733 ssh_runner.go:195] Run: ls
	I0520 13:44:37.907241  632733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:44:37.912075  632733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:44:37.912103  632733 status.go:422] ha-170194 apiserver status = Running (err=<nil>)
	I0520 13:44:37.912118  632733 status.go:257] ha-170194 status: &{Name:ha-170194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:44:37.912141  632733 status.go:255] checking status of ha-170194-m02 ...
	I0520 13:44:37.912581  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.912630  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.928818  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0520 13:44:37.929594  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.931292  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.931320  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.931743  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.931991  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetState
	I0520 13:44:37.933725  632733 status.go:330] ha-170194-m02 host status = "Running" (err=<nil>)
	I0520 13:44:37.933749  632733 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:44:37.934034  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.934072  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.949692  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I0520 13:44:37.950154  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.950692  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.950716  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.951083  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.951304  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetIP
	I0520 13:44:37.954245  632733 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:44:37.954706  632733 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:39:43 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:44:37.954730  632733 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:44:37.954902  632733 host.go:66] Checking if "ha-170194-m02" exists ...
	I0520 13:44:37.955204  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:37.955243  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:37.970696  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0520 13:44:37.971331  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:37.971966  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:37.971993  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:37.972388  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:37.972597  632733 main.go:141] libmachine: (ha-170194-m02) Calling .DriverName
	I0520 13:44:37.972788  632733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:44:37.972813  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHHostname
	I0520 13:44:37.976052  632733 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:44:37.976581  632733 main.go:141] libmachine: (ha-170194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:bd:91", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:39:43 +0000 UTC Type:0 Mac:52:54:00:3b:bd:91 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-170194-m02 Clientid:01:52:54:00:3b:bd:91}
	I0520 13:44:37.976612  632733 main.go:141] libmachine: (ha-170194-m02) DBG | domain ha-170194-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:3b:bd:91 in network mk-ha-170194
	I0520 13:44:37.976754  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHPort
	I0520 13:44:37.976964  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHKeyPath
	I0520 13:44:37.977112  632733 main.go:141] libmachine: (ha-170194-m02) Calling .GetSSHUsername
	I0520 13:44:37.977293  632733 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m02/id_rsa Username:docker}
	I0520 13:44:38.062932  632733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:44:38.081298  632733 kubeconfig.go:125] found "ha-170194" server: "https://192.168.39.254:8443"
	I0520 13:44:38.081336  632733 api_server.go:166] Checking apiserver status ...
	I0520 13:44:38.081382  632733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:44:38.099921  632733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W0520 13:44:38.110243  632733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:44:38.110380  632733 ssh_runner.go:195] Run: ls
	I0520 13:44:38.115257  632733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 13:44:38.119810  632733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 13:44:38.119838  632733 status.go:422] ha-170194-m02 apiserver status = Running (err=<nil>)
	I0520 13:44:38.119847  632733 status.go:257] ha-170194-m02 status: &{Name:ha-170194-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:44:38.119871  632733 status.go:255] checking status of ha-170194-m04 ...
	I0520 13:44:38.120191  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:38.120228  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:38.135403  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0520 13:44:38.135960  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:38.136454  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:38.136474  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:38.136841  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:38.137069  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetState
	I0520 13:44:38.138840  632733 status.go:330] ha-170194-m04 host status = "Running" (err=<nil>)
	I0520 13:44:38.138857  632733 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:44:38.139122  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:38.139156  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:38.154097  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0520 13:44:38.154570  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:38.155054  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:38.155070  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:38.155426  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:38.155592  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetIP
	I0520 13:44:38.158595  632733 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:44:38.159071  632733 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:42:04 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:44:38.159101  632733 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:44:38.159279  632733 host.go:66] Checking if "ha-170194-m04" exists ...
	I0520 13:44:38.159581  632733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:44:38.159628  632733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:44:38.175982  632733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0520 13:44:38.176405  632733 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:44:38.176854  632733 main.go:141] libmachine: Using API Version  1
	I0520 13:44:38.176875  632733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:44:38.177204  632733 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:44:38.177433  632733 main.go:141] libmachine: (ha-170194-m04) Calling .DriverName
	I0520 13:44:38.177611  632733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:44:38.177632  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHHostname
	I0520 13:44:38.180862  632733 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:44:38.181416  632733 main.go:141] libmachine: (ha-170194-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:5c:04", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:42:04 +0000 UTC Type:0 Mac:52:54:00:73:5c:04 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-170194-m04 Clientid:01:52:54:00:73:5c:04}
	I0520 13:44:38.181446  632733 main.go:141] libmachine: (ha-170194-m04) DBG | domain ha-170194-m04 has defined IP address 192.168.39.163 and MAC address 52:54:00:73:5c:04 in network mk-ha-170194
	I0520 13:44:38.181638  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHPort
	I0520 13:44:38.181850  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHKeyPath
	I0520 13:44:38.181999  632733 main.go:141] libmachine: (ha-170194-m04) Calling .GetSSHUsername
	I0520 13:44:38.182122  632733 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194-m04/id_rsa Username:docker}
	W0520 13:44:56.749605  632733 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0520 13:44:56.749714  632733 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0520 13:44:56.749739  632733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0520 13:44:56.749749  632733 status.go:257] ha-170194-m04 status: &{Name:ha-170194-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0520 13:44:56.749772  632733 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-170194 -n ha-170194
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-170194 logs -n 25: (1.693826024s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m04 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp testdata/cp-test.txt                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194:/home/docker/cp-test_ha-170194-m04_ha-170194.txt                       |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194 sudo cat                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194.txt                                 |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m02:/home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m02 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m03:/home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n                                                                 | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | ha-170194-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-170194 ssh -n ha-170194-m03 sudo cat                                          | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC | 20 May 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-170194 node stop m02 -v=7                                                     | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-170194 node start m02 -v=7                                                    | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-170194 -v=7                                                           | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-170194 -v=7                                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-170194 --wait=true -v=7                                                    | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:37 UTC | 20 May 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-170194                                                                | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:42 UTC |                     |
	| node    | ha-170194 node delete m03 -v=7                                                   | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:42 UTC | 20 May 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-170194 stop -v=7                                                              | ha-170194 | jenkins | v1.33.1 | 20 May 24 13:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:37:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:37:57.616877  630458 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:37:57.617122  630458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:37:57.617131  630458 out.go:304] Setting ErrFile to fd 2...
	I0520 13:37:57.617135  630458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:37:57.617344  630458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:37:57.617880  630458 out.go:298] Setting JSON to false
	I0520 13:37:57.618851  630458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12018,"bootTime":1716200260,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:37:57.618914  630458 start.go:139] virtualization: kvm guest
	I0520 13:37:57.622275  630458 out.go:177] * [ha-170194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:37:57.624803  630458 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:37:57.624772  630458 notify.go:220] Checking for updates...
	I0520 13:37:57.627055  630458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:37:57.629285  630458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:37:57.631572  630458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:37:57.633653  630458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:37:57.635704  630458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:37:57.638347  630458 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:37:57.638485  630458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:37:57.639000  630458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:37:57.639091  630458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:37:57.654493  630458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0520 13:37:57.654930  630458 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:37:57.655509  630458 main.go:141] libmachine: Using API Version  1
	I0520 13:37:57.655537  630458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:37:57.655941  630458 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:37:57.656160  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.693674  630458 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:37:57.695820  630458 start.go:297] selected driver: kvm2
	I0520 13:37:57.695844  630458 start.go:901] validating driver "kvm2" against &{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:37:57.696007  630458 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:37:57.696378  630458 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:37:57.696480  630458 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:37:57.712550  630458 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:37:57.713202  630458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:37:57.713275  630458 cni.go:84] Creating CNI manager for ""
	I0520 13:37:57.713288  630458 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 13:37:57.713339  630458 start.go:340] cluster config:
	{Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:37:57.713484  630458 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:37:57.716212  630458 out.go:177] * Starting "ha-170194" primary control-plane node in "ha-170194" cluster
	I0520 13:37:57.718243  630458 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:37:57.718292  630458 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:37:57.718303  630458 cache.go:56] Caching tarball of preloaded images
	I0520 13:37:57.718408  630458 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:37:57.718421  630458 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:37:57.718537  630458 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/config.json ...
	I0520 13:37:57.718772  630458 start.go:360] acquireMachinesLock for ha-170194: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:37:57.718823  630458 start.go:364] duration metric: took 30.908µs to acquireMachinesLock for "ha-170194"
	I0520 13:37:57.718842  630458 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:37:57.718856  630458 fix.go:54] fixHost starting: 
	I0520 13:37:57.719153  630458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:37:57.719194  630458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:37:57.733611  630458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0520 13:37:57.734021  630458 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:37:57.734497  630458 main.go:141] libmachine: Using API Version  1
	I0520 13:37:57.734519  630458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:37:57.734897  630458 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:37:57.735110  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.735290  630458 main.go:141] libmachine: (ha-170194) Calling .GetState
	I0520 13:37:57.736883  630458 fix.go:112] recreateIfNeeded on ha-170194: state=Running err=<nil>
	W0520 13:37:57.736908  630458 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:37:57.740905  630458 out.go:177] * Updating the running kvm2 "ha-170194" VM ...
	I0520 13:37:57.743151  630458 machine.go:94] provisionDockerMachine start ...
	I0520 13:37:57.743184  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:37:57.743445  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.746170  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.746596  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.746622  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.746803  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.746999  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.747190  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.747336  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.747535  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.747708  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.747719  630458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:37:57.850683  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:37:57.850729  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:57.850966  630458 buildroot.go:166] provisioning hostname "ha-170194"
	I0520 13:37:57.850985  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:57.851243  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.854177  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.854630  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.854657  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.854840  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.855050  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.855206  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.855347  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.855516  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.855673  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.855686  630458 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170194 && echo "ha-170194" | sudo tee /etc/hostname
	I0520 13:37:57.973785  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170194
	
	I0520 13:37:57.973833  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:57.976774  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.977261  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:57.977290  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:57.977566  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:57.977808  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.977987  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:57.978158  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:57.978347  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:57.978501  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:57.978516  630458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170194/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:37:58.078666  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:37:58.078696  630458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 13:37:58.078723  630458 buildroot.go:174] setting up certificates
	I0520 13:37:58.078731  630458 provision.go:84] configureAuth start
	I0520 13:37:58.078743  630458 main.go:141] libmachine: (ha-170194) Calling .GetMachineName
	I0520 13:37:58.079084  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:37:58.082173  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.082649  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.082683  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.082825  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.084988  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.085442  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.085470  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.085590  630458 provision.go:143] copyHostCerts
	I0520 13:37:58.085618  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:37:58.085652  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 13:37:58.085671  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 13:37:58.085730  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 13:37:58.086111  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:37:58.086155  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 13:37:58.086168  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 13:37:58.086218  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 13:37:58.086299  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:37:58.086327  630458 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 13:37:58.086338  630458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 13:37:58.086376  630458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 13:37:58.086452  630458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.ha-170194 san=[127.0.0.1 192.168.39.92 ha-170194 localhost minikube]
	I0520 13:37:58.316882  630458 provision.go:177] copyRemoteCerts
	I0520 13:37:58.316959  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:37:58.316988  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.319920  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.320406  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.320442  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.320592  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:58.320811  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.320987  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:58.321186  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
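Note: the sshutil line above carries the connection details used by every ssh_runner.go Run: command in this log: the VM at 192.168.39.92:22, the per-profile key under .minikube/machines/ha-170194/id_rsa, and the "docker" user. A provisioning command can be replayed by hand with the same values; an illustrative sketch only, not something the test itself executes:

	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa \
	    docker@192.168.39.92 'sudo systemctl status crio --no-pager'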
	I0520 13:37:58.399581  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:37:58.399682  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 13:37:58.422656  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:37:58.422736  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 13:37:58.445502  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:37:58.445580  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:37:58.468669  630458 provision.go:87] duration metric: took 389.92062ms to configureAuth
	I0520 13:37:58.468709  630458 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:37:58.468932  630458 config.go:182] Loaded profile config "ha-170194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:37:58.469026  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:37:58.471766  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.472102  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:37:58.472131  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:37:58.472314  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:37:58.472523  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.472691  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:37:58.472846  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:37:58.473023  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:37:58.473186  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:37:58.473201  630458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:39:29.348044  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:39:29.348083  630458 machine.go:97] duration metric: took 1m31.604906844s to provisionDockerMachine
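Two notes at this point. First, the %!s(MISSING) fragments above (and the later date +%!s(MISSING).%!N(MISSING)) are not part of the remote commands: that is how Go's fmt package renders a %s or %N verb with no matching argument when the command string goes through a format call, so the commands actually sent to the VM contain a plain %s / %N. Second, the SSH command that writes crio.minikube, which ends with sudo systemctl restart crio, was issued at 13:37:58 and only returned at 13:39:29, which accounts for essentially all of the 1m31.6s reported for provisionDockerMachine. Reconstructed, that command was:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio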
	I0520 13:39:29.348095  630458 start.go:293] postStartSetup for "ha-170194" (driver="kvm2")
	I0520 13:39:29.348107  630458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:39:29.348125  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.348528  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:39:29.348563  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.351804  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.352328  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.352351  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.352634  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.352863  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.353084  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.353269  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:39:29.433942  630458 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:39:29.438165  630458 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:39:29.438195  630458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 13:39:29.438263  630458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 13:39:29.438375  630458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 13:39:29.438393  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 13:39:29.438506  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:39:29.448050  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:39:29.475590  630458 start.go:296] duration metric: took 127.478491ms for postStartSetup
	I0520 13:39:29.475640  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.476001  630458 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 13:39:29.476038  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.478860  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.479313  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.479343  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.479492  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.479671  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.479826  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.480013  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:39:29.558948  630458 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 13:39:29.558985  630458 fix.go:56] duration metric: took 1m31.840129922s for fixHost
	I0520 13:39:29.559041  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.561848  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.562369  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.562402  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.562525  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.562725  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.562882  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.563015  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.563186  630458 main.go:141] libmachine: Using SSH client type: native
	I0520 13:39:29.563466  630458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0520 13:39:29.563486  630458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:39:29.666072  630458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716212369.648510714
	
	I0520 13:39:29.666102  630458 fix.go:216] guest clock: 1716212369.648510714
	I0520 13:39:29.666109  630458 fix.go:229] Guest: 2024-05-20 13:39:29.648510714 +0000 UTC Remote: 2024-05-20 13:39:29.558995033 +0000 UTC m=+91.979086619 (delta=89.515681ms)
	I0520 13:39:29.666133  630458 fix.go:200] guest clock delta is within tolerance: 89.515681ms
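The clock-skew check above runs date +%s.%N on the guest (the garbled %!s/%!N line) and compares the result with the time minikube records locally; here the 89.5ms delta is accepted as within tolerance. The same comparison can be made by hand (illustrative):

	# guest clock, then host clock, in seconds.nanoseconds
	ssh -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa \
	    docker@192.168.39.92 'date +%s.%N'; date +%s.%N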
	I0520 13:39:29.666138  630458 start.go:83] releasing machines lock for "ha-170194", held for 1m31.947306351s
	I0520 13:39:29.666167  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.666487  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:39:29.669259  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.669659  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.669703  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.669843  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670375  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670572  630458 main.go:141] libmachine: (ha-170194) Calling .DriverName
	I0520 13:39:29.670658  630458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:39:29.670711  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.670775  630458 ssh_runner.go:195] Run: cat /version.json
	I0520 13:39:29.670792  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHHostname
	I0520 13:39:29.673471  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.673804  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.673830  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674007  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.674107  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674274  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.674444  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.674560  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:29.674584  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:29.674610  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	I0520 13:39:29.674771  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHPort
	I0520 13:39:29.674982  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHKeyPath
	I0520 13:39:29.675135  630458 main.go:141] libmachine: (ha-170194) Calling .GetSSHUsername
	I0520 13:39:29.675297  630458 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/ha-170194/id_rsa Username:docker}
	W0520 13:39:29.790695  630458 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:39:29.790829  630458 ssh_runner.go:195] Run: systemctl --version
	I0520 13:39:29.796935  630458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:39:29.956452  630458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:39:29.962243  630458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:39:29.962307  630458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:39:29.970973  630458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:39:29.970997  630458 start.go:494] detecting cgroup driver to use...
	I0520 13:39:29.971070  630458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:39:29.986711  630458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:39:30.000020  630458 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:39:30.000091  630458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:39:30.013978  630458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:39:30.027709  630458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:39:30.176696  630458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:39:30.316964  630458 docker.go:233] disabling docker service ...
	I0520 13:39:30.317055  630458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:39:30.332119  630458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:39:30.345096  630458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:39:30.485989  630458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:39:30.629891  630458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:39:30.643632  630458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:39:30.661385  630458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:39:30.661444  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.671977  630458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:39:30.672044  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.681566  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.691099  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.700718  630458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:39:30.710318  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.719690  630458 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.729989  630458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:39:30.739409  630458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:39:30.747863  630458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:39:30.756091  630458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:39:30.901860  630458 ssh_runner.go:195] Run: sudo systemctl restart crio
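The sed/sh commands above all edit the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf, before the daemon-reload and restart. Condensed, the main changes applied on the VM are (same files and values as shown in the log):

	# pause image and cgroup driver expected by the kubeadm config below
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports unprivileged and make sure IPv4 forwarding is on
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio

Unlike the 91s restart during provisioning, this second restart returns in roughly a second (13:39:30.90 to 13:39:31.80).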
	I0520 13:39:31.802038  630458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:39:31.802131  630458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:39:31.807352  630458 start.go:562] Will wait 60s for crictl version
	I0520 13:39:31.807412  630458 ssh_runner.go:195] Run: which crictl
	I0520 13:39:31.811056  630458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:39:31.846648  630458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:39:31.846727  630458 ssh_runner.go:195] Run: crio --version
	I0520 13:39:31.878241  630458 ssh_runner.go:195] Run: crio --version
	I0520 13:39:31.909741  630458 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:39:31.911825  630458 main.go:141] libmachine: (ha-170194) Calling .GetIP
	I0520 13:39:31.914716  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:31.915062  630458 main.go:141] libmachine: (ha-170194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:8c:ad", ip: ""} in network mk-ha-170194: {Iface:virbr1 ExpiryTime:2024-05-20 14:28:08 +0000 UTC Type:0 Mac:52:54:00:4b:8c:ad Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-170194 Clientid:01:52:54:00:4b:8c:ad}
	I0520 13:39:31.915093  630458 main.go:141] libmachine: (ha-170194) DBG | domain ha-170194 has defined IP address 192.168.39.92 and MAC address 52:54:00:4b:8c:ad in network mk-ha-170194
	I0520 13:39:31.915291  630458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:39:31.919887  630458 kubeadm.go:877] updating cluster {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:39:31.920027  630458 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:39:31.920086  630458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:39:31.964377  630458 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:39:31.964409  630458 crio.go:433] Images already preloaded, skipping extraction
	I0520 13:39:31.964475  630458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:39:31.998399  630458 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:39:31.998424  630458 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:39:31.998433  630458 kubeadm.go:928] updating node { 192.168.39.92 8443 v1.30.1 crio true true} ...
	I0520 13:39:31.998544  630458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
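The kubelet unit override above (ExecStart with --hostname-override=ha-170194 and --node-ip=192.168.39.92) is the content written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp step further down. On the VM it can be inspected, for example, with:

	# show the kubelet unit together with its minikube drop-in
	systemctl cat kubelet
	systemctl show kubelet -p DropInPaths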
	I0520 13:39:31.998610  630458 ssh_runner.go:195] Run: crio config
	I0520 13:39:32.043875  630458 cni.go:84] Creating CNI manager for ""
	I0520 13:39:32.043895  630458 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 13:39:32.043916  630458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:39:32.043963  630458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170194 NodeName:ha-170194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:39:32.044130  630458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-170194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
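This is the full kubeadm config minikube renders for the primary control plane; the scp step further down writes it to /var/tmp/minikube/kubeadm.yaml.new. If control-plane bring-up needs debugging, the file can be checked in place. One option, assuming kubeadm sits next to the kubelet binary under /var/lib/minikube/binaries/v1.30.1 (this log only shows that path for kubelet) and that the installed kubeadm is recent enough to have the validate subcommand:

	# validate the rendered config against the v1.30.1 kubeadm schema
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new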
	
	I0520 13:39:32.044148  630458 kube-vip.go:115] generating kube-vip config ...
	I0520 13:39:32.044203  630458 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 13:39:32.055659  630458 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 13:39:32.055771  630458 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 13:39:32.055832  630458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:39:32.065156  630458 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:39:32.065221  630458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 13:39:32.075466  630458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0520 13:39:32.091771  630458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:39:32.108200  630458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 13:39:32.123784  630458 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
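The kube-vip manifest generated above is dropped into /etc/kubernetes/manifests as a static pod; per its env block it binds the HA virtual IP 192.168.39.254 on eth0 and load-balances API traffic on port 8443 across the control planes. Two quick, illustrative checks (the first from the control-plane VM, the second from the VM or the test host):

	# is the VIP currently held by this control-plane node?
	ip addr show eth0 | grep 192.168.39.254
	# does the API server answer on the VIP?
	curl -k https://192.168.39.254:8443/version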
	I0520 13:39:32.141422  630458 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 13:39:32.145307  630458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:39:32.293077  630458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:39:32.308010  630458 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194 for IP: 192.168.39.92
	I0520 13:39:32.308036  630458 certs.go:194] generating shared ca certs ...
	I0520 13:39:32.308052  630458 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.308225  630458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 13:39:32.308281  630458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 13:39:32.308295  630458 certs.go:256] generating profile certs ...
	I0520 13:39:32.308389  630458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/client.key
	I0520 13:39:32.308426  630458 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd
	I0520 13:39:32.308448  630458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92 192.168.39.155 192.168.39.3 192.168.39.254]
	I0520 13:39:32.682527  630458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd ...
	I0520 13:39:32.682563  630458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd: {Name:mkfa69fc36ddc2d1a2a6de520d370ba30be7c53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.682830  630458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd ...
	I0520 13:39:32.682854  630458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd: {Name:mkd7bcdab272b5fb0c2e8cb0a77afbc9d037a96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:39:32.682982  630458 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt.a5e60bbd -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt
	I0520 13:39:32.683295  630458 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key.a5e60bbd -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key
	I0520 13:39:32.683494  630458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key
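The "generating signed profile cert" step issues an API-server certificate whose IP SANs cover the service IP, localhost, the three control-plane node IPs, and the HA virtual IP listed above. The real helper is minikube's crypto.go (as the log shows); the sketch below is only a generic crypto/x509 example of the same idea, with a throwaway CA and output paths chosen for illustration. Error handling is elided to keep it short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example is self-contained; minikube reuses the
	// already-generated minikubeCA instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server certificate with the IP SANs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.92"), net.ParseIP("192.168.39.155"),
			net.ParseIP("192.168.39.3"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write PEM output locally; minikube writes under its profile directory
	// and later copies the files to /var/lib/minikube/certs on the node.
	certOut, _ := os.Create("apiserver.crt")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	certOut.Close()
	keyOut, _ := os.Create("apiserver.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
	keyOut.Close()
}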
	I0520 13:39:32.683515  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:39:32.683533  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:39:32.683553  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:39:32.683571  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:39:32.683587  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:39:32.683602  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:39:32.683617  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:39:32.683635  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:39:32.683702  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 13:39:32.683748  630458 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 13:39:32.683761  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 13:39:32.683843  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 13:39:32.683886  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:39:32.683939  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 13:39:32.683998  630458 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 13:39:32.684038  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.684058  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:32.684074  630458 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 13:39:32.685366  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:39:32.712559  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:39:32.736326  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:39:32.760562  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 13:39:32.783953  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 13:39:32.806831  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 13:39:32.831120  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:39:32.856018  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/ha-170194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:39:32.878644  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 13:39:32.901558  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:39:32.924050  630458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 13:39:32.946968  630458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:39:32.962701  630458 ssh_runner.go:195] Run: openssl version
	I0520 13:39:32.968510  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 13:39:32.978524  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.982526  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.982574  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 13:39:32.987882  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:39:32.996865  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:39:33.007028  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.011176  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.011249  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:39:33.016863  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:39:33.025859  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 13:39:33.036148  630458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.040434  630458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.040489  630458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 13:39:33.046056  630458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
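Each of the three openssl/ln blocks above installs a CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted roots. A hedged sketch of that hash-and-symlink step, shelling out to the same openssl invocation as the log (paths and the helper name are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs as <subject-hash>.0,
// using the same `openssl x509 -hash -noout` call seen in the log.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Recreate the symlink idempotently, like the `test -L || ln -fs` guard.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	for _, cert := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/609867.pem",
		"/usr/share/ca-certificates/6098672.pem",
	} {
		if err := installCACert(cert); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}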
	I0520 13:39:33.055280  630458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:39:33.059705  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:39:33.065228  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:39:33.070629  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:39:33.076117  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:39:33.081392  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:39:33.086659  630458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
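Each `openssl x509 -checkend 86400` run above verifies that the named certificate is still valid for at least another 24 hours before the cluster is restarted. The same check expressed with Go's crypto/x509 (the path in main is a placeholder; the log checks several files under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// in less than d — the analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; substitute any of the certs checked in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}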
	I0520 13:39:33.091783  630458 kubeadm.go:391] StartCluster: {Name:ha-170194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-170194 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.163 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:39:33.091891  630458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:39:33.091950  630458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:39:33.129426  630458 cri.go:89] found id: "288c25b639238858678ff15231bd6cb6719c8101c561b37043af0511d4979d50"
	I0520 13:39:33.129454  630458 cri.go:89] found id: "6e288ee54e1532f81c3d2587067144f5d8d0382f8be5e992c6b6bd7c9cc1de98"
	I0520 13:39:33.129457  630458 cri.go:89] found id: "aec5b752545e8d9abd4d44817bed499e6bef842a475cd12e2a3dee7cadd5e0dc"
	I0520 13:39:33.129460  630458 cri.go:89] found id: "20ef4886be391f5f00d7681fc4012bf67995bc8ecf4e1fae3a30b9cf6ad18f37"
	I0520 13:39:33.129462  630458 cri.go:89] found id: "9ea85179fd0503aa9f2a864a7e621c5f12560217b135dee98c2f4cea9a4e5e59"
	I0520 13:39:33.129466  630458 cri.go:89] found id: "d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4"
	I0520 13:39:33.129468  630458 cri.go:89] found id: "6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583"
	I0520 13:39:33.129470  630458 cri.go:89] found id: "ef86504a6a21868d78111ee02e2dfffac4c0417e767d4026c693a1536b8b19d2"
	I0520 13:39:33.129473  630458 cri.go:89] found id: "2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b"
	I0520 13:39:33.129480  630458 cri.go:89] found id: "334824a1ffd8bfbb3cb38a60c62bafa0a027c3c9f2d464b2aece3440327048f5"
	I0520 13:39:33.129482  630458 cri.go:89] found id: "e40d2be6b414dbb4aff49d9c9270179b8ef07ff0fd8215c35a69afa89c8b9a23"
	I0520 13:39:33.129485  630458 cri.go:89] found id: "bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2"
	I0520 13:39:33.129487  630458 cri.go:89] found id: "d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8"
	I0520 13:39:33.129490  630458 cri.go:89] found id: "b0dc1542ea21a6985bbc8673bdd0d29ddd6774c864aa059ea5d9aa8552ed47fa"
	I0520 13:39:33.129494  630458 cri.go:89] found id: ""
	I0520 13:39:33.129545  630458 ssh_runner.go:195] Run: sudo runc list -f json
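As part of StartCluster, the existing kube-system containers are enumerated first: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line, and the final empty "found id" entry above is just the trailing newline. A small sketch that runs the same command and collects the IDs (the command line mirrors the log; the surrounding program is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log: all containers (-a), IDs only (--quiet),
	// restricted to kube-system pods via the CRI pod-namespace label.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}

	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" { // skip the trailing blank entry
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}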
	
	
	==> CRI-O <==
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.396586556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212697396563824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21f28c7a-ef55-48ba-9026-a6309e07eb24 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.397093948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fafb720f-087c-4d5d-9db2-4306d1b1045c name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.397163875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fafb720f-087c-4d5d-9db2-4306d1b1045c name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.397614370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fafb720f-087c-4d5d-9db2-4306d1b1045c name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.438484186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26ef8da5-a49d-4e00-81a1-bd732677f4ad name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.438571633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26ef8da5-a49d-4e00-81a1-bd732677f4ad name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.440111418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a096243-bedf-4d52-b755-2d4e8eb49d8d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.440645606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212697440618864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a096243-bedf-4d52-b755-2d4e8eb49d8d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.441221773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82bb84d6-0111-46b8-bc13-1f0d310c3aeb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.441299341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82bb84d6-0111-46b8-bc13-1f0d310c3aeb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.441702306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82bb84d6-0111-46b8-bc13-1f0d310c3aeb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.482578962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e450a5e-4299-4b52-99ea-8ad9ad2104ea name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.482664477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e450a5e-4299-4b52-99ea-8ad9ad2104ea name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.484155782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ef4ba70-8926-49a2-8293-374f176b0ca9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.484577208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212697484554398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ef4ba70-8926-49a2-8293-374f176b0ca9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.485169154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8432af35-ad34-4487-9421-aa19a500b912 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.485223030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8432af35-ad34-4487-9421-aa19a500b912 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.485644498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8432af35-ad34-4487-9421-aa19a500b912 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.528032915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79b8266c-b17b-4b5d-9ee6-4568e32219d5 name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.528119440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79b8266c-b17b-4b5d-9ee6-4568e32219d5 name=/runtime.v1.RuntimeService/Version
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.528983530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bceec51b-8961-47aa-a78a-bdf584a5f29e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.529454933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716212697529431214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bceec51b-8961-47aa-a78a-bdf584a5f29e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.529968367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b19664d-dce2-4654-a5ff-6dceee34b0e7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.530042059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b19664d-dce2-4654-a5ff-6dceee34b0e7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:44:57 ha-170194 crio[3762]: time="2024-05-20 13:44:57.530459510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716212434295133871,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a34f8e509820a2b68edc25834eedaf4e5c409243d83aad4ec3ff3bbcc686713,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716212421269583785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716212417262746135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87196f06f4196e74137e462f5db1f7978fa34308ac81955b53ec21931c310ed9,PodSandboxId:0376397fdcb9af924b0ef345b957589fa7d38c5d8a91887203ad0fbab61bc0e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716212410642562490,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kubernetes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716212409687160547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87fa4ca39ac2878206e0c3551663210114f86ef96fa4b960d288bfc2027a359e,PodSandboxId:d1314f832bf2703533be40178d4c918e9f4edfbe5a568c1134a28a96ee26e919,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716212393172309394,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36b1022881be70389907328afc31c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97,PodSandboxId:c0b19b98fe1f2f1d27ec08377df3877d3f19f110fbccf473850ead651aff623e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716212377684956955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3ad4de192a107b941a2ab04025b0ddb05103fa49eeb708dc985d66b444e4ed86,PodSandboxId:6617e85a79be145af1b61be50f18db7a10820a004e3e4aac973dd59152709627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716212377489696321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0e094e-1f65-407a-9fc0-a3c55f9de344,},Annotations:map[string]string{io.kubernetes.container.hash: a861fca0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720f2ab
5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99,PodSandboxId:42415cf90d345b4c5dc3f830da6a839d4fe2a1aaa512daa1c502180f3ab3f6c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716212377336204646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cmd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44545bda-e29b-44d6-97f7-45290fda6e37,},Annotations:map[string]string{io.kubernetes.container.hash: 6cd0969c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc0
11a7aeccb9edb1566b91b,PodSandboxId:3bb9df5ac1214f20dffbcd948ee4e3ff41da52a2ad3e75645d2f7f9e96f02d80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716212377245682095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9
,PodSandboxId:49b4ed5fa5d93c1354b6af5fe16b198d680298c826bfdbf62001678e3cf5f847,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377239099221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439,PodSandboxId:e74fd0af7a898956b42b42d530c2e127235247b873ce9361b3b1cc6afa020d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716212377159023879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8,PodSandboxId:4b2e01ae01a57f7f5014f51fcf29fbbc1af5773ecfdf9c5069cce3247f592d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716212377155045792,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61864ede3229362b45cbdcfb69b
0adfa,},Annotations:map[string]string{io.kubernetes.container.hash: a822595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4,PodSandboxId:6118c63443924c19612b1f19f1f5d0a31f467cbc5929b1f23ce7c3715f8bc541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716212376992412017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e70fae2874e36215d4c495
ae742101cd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028,PodSandboxId:254b523bd071201d4d40cb0f8b3c4ed66d3399fcafb509759840a97acca1fb7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716212377037481888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kuber
netes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf740d9b5f06de9c1c7f18ff96ae4e526c6be4ceefd003d80cbf78d2a4a8de2e,PodSandboxId:85c1015ea36da8c46f104ff7ec479b35768c9d5a0cfc6b141e2355692dadb1b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716211886372304954,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kn5pb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc78b16d-ff4a-4bb6-9a1e-62f31641b442,},Annotations:map[string]string{io.kuberne
tes.container.hash: f5818ca9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4,PodSandboxId:cb6f21c242e20885a848e6e54b17b052b76485170e8d42e9306861f8b60b9773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729633877124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vk78q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 334eb0ed-c771-4840-92d1-04c1b9ec5179,},Annotations:map[string]string{io.kubernetes.container.hash: 90e369e1,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583,PodSandboxId:901f35680bee52a4fdcd71f44cbdd95bb19dfa886f3ea6b532d85fee30bffbe9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211729610237371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-s28r6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b161e3ee-7969-4861-9778-3bc34356d792,},Annotations:map[string]string{io.kubernetes.container.hash: 3a7aa351,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b,PodSandboxId:ef9cc40406ad74740df8eabc268ef219b5ddbbd891b2bbcb11b6cb63e0559867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211727429893188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qth8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43fd92-69c8-419e-9f78-0b5d489b561a,},Annotations:map[string]string{io.kubernetes.container.hash: 95826738,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2,PodSandboxId:0a5e941c6740dcbed3cd2281373ada1e41c0ae9be65f6fcbdf489717a76cdcf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211708079043565,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04ac17edd15cb620b07cb0f76241c8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 77d850f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8,PodSandboxId:1a02a71cebea30837cb4dfa8906ebe6d049ffc18862ce9371a6c5f6de47e2edf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1716211708025464643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-170194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f01c40c0bc6fcf2ee8daf696630206e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b19664d-dce2-4654-a5ff-6dceee34b0e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45c16067478b0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   42415cf90d345       kindnet-cmd8x
	7a34f8e509820       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       3                   6617e85a79be1       storage-provisioner
	f7785a144a0e7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   4b2e01ae01a57       kube-apiserver-ha-170194
	87196f06f4196       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   0376397fdcb9a       busybox-fc5497c4f-kn5pb
	b195802788ab5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   6118c63443924       kube-controller-manager-ha-170194
	87fa4ca39ac28       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   d1314f832bf27       kube-vip-ha-170194
	3df09e40b72ac       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   c0b19b98fe1f2       kube-proxy-qth8f
	3ad4de192a107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       2                   6617e85a79be1       storage-provisioner
	6720f2ab5ded7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   42415cf90d345       kindnet-cmd8x
	5c64aa00ef2d6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   3bb9df5ac1214       kube-scheduler-ha-170194
	749db6e85ef41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   49b4ed5fa5d93       coredns-7db6d8ff4d-s28r6
	f2418473c6764       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e74fd0af7a898       coredns-7db6d8ff4d-vk78q
	ca3e13e017f2f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   4b2e01ae01a57       kube-apiserver-ha-170194
	718c55ec406ad       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   254b523bd0712       etcd-ha-170194
	49de9de51e3a4       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   6118c63443924       kube-controller-manager-ha-170194
	cf740d9b5f06d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   85c1015ea36da       busybox-fc5497c4f-kn5pb
	d3c1362d9012c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   cb6f21c242e20       coredns-7db6d8ff4d-vk78q
	6bd28e2e55305       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   901f35680bee5       coredns-7db6d8ff4d-s28r6
	2ca782f6be5aa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      16 minutes ago      Exited              kube-proxy                0                   ef9cc40406ad7       kube-proxy-qth8f
	bd7f5eac64d8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   0a5e941c6740d       etcd-ha-170194
	d125c402bd4cb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   1a02a71cebea3       kube-scheduler-ha-170194
	
	
	==> coredns [6bd28e2e55305f39b0bbf2c0bed91e8c377623c8914558d8e7cfc97d1ef48583] <==
	[INFO] 10.244.0.4:34499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153438s
	[INFO] 10.244.0.4:47635 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003467859s
	[INFO] 10.244.0.4:37386 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211396s
	[INFO] 10.244.0.4:37274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116452s
	[INFO] 10.244.1.2:33488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156093s
	[INFO] 10.244.1.2:44452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130005s
	[INFO] 10.244.2.2:54953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216728s
	[INFO] 10.244.2.2:41118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098892s
	[INFO] 10.244.0.4:52970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086695s
	[INFO] 10.244.0.4:33272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104087s
	[INFO] 10.244.0.4:47074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061643s
	[INFO] 10.244.1.2:46181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125314s
	[INFO] 10.244.1.2:60651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114425s
	[INFO] 10.244.2.2:39831 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092598s
	[INFO] 10.244.2.2:36745 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009346s
	[INFO] 10.244.0.4:58943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126961s
	[INFO] 10.244.0.4:51569 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093816s
	[INFO] 10.244.0.4:33771 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095037s
	[INFO] 10.244.1.2:51959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152608s
	[INFO] 10.244.2.2:41273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085919s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [749db6e85ef414586fab40386b9a1d55edbbf2139eee1a9abbd8cdd389ce28f9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38032->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38032->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d3c1362d9012c00f6e0f6eeb8ccee8e93e8ca886cca260212707cbaa58e1f1c4] <==
	[INFO] 10.244.1.2:39465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205731s
	[INFO] 10.244.1.2:48674 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104027s
	[INFO] 10.244.1.2:42811 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001662979s
	[INFO] 10.244.1.2:55637 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155358s
	[INFO] 10.244.1.2:34282 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105391s
	[INFO] 10.244.2.2:55675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129728s
	[INFO] 10.244.2.2:33579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845622s
	[INFO] 10.244.2.2:38991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087704s
	[INFO] 10.244.2.2:60832 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001368991s
	[INFO] 10.244.2.2:49213 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064756s
	[INFO] 10.244.2.2:54664 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073817s
	[INFO] 10.244.0.4:58834 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096728s
	[INFO] 10.244.1.2:58412 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081949s
	[INFO] 10.244.1.2:52492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085342s
	[INFO] 10.244.2.2:34598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011731s
	[INFO] 10.244.2.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131389s
	[INFO] 10.244.0.4:33373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185564s
	[INFO] 10.244.1.2:38899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131605s
	[INFO] 10.244.1.2:39420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251117s
	[INFO] 10.244.1.2:39569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142225s
	[INFO] 10.244.2.2:33399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185075s
	[INFO] 10.244.2.2:48490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100278s
	[INFO] 10.244.2.2:35988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115036s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2418473c6764f6381282842c571770e8b7d6ccd91a111236466f027ac5c3439] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57904->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57906->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57906->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-170194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_28_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:28:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:44:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:40:20 +0000   Mon, 20 May 2024 13:28:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-170194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c0123e982bf4840b6eb6a3f175c7438
	  System UUID:                4c0123e9-82bf-4840-b6eb-6a3f175c7438
	  Boot ID:                    37123cd6-de29-4d66-9faf-c58bcb2e7628
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kn5pb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-s28r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vk78q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-170194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-cmd8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-170194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-170194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-qth8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-170194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-170194                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 4m34s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-170194 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-170194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-170194 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-170194 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Warning  ContainerGCFailed        6m23s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m30s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           4m26s  node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	  Normal   RegisteredNode           3m6s   node-controller  Node ha-170194 event: Registered Node ha-170194 in Controller
	
	
	Name:               ha-170194-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:44:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:43:20 +0000   Mon, 20 May 2024 13:43:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:43:20 +0000   Mon, 20 May 2024 13:43:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:43:20 +0000   Mon, 20 May 2024 13:43:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:43:20 +0000   Mon, 20 May 2024 13:43:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    ha-170194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcdee518e92c4c0ba5f3ba763f746ea2
	  System UUID:                dcdee518-e92c-4c0b-a5f3-ba763f746ea2
	  Boot ID:                    f9827b30-252d-42f7-b6ca-2b6b5d85ff27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmq2s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-170194-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-5mg44                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-170194-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-170194-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7ncvb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-170194-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-170194-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  Starting                 4m20s                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                  node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           14m                  node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           13m                  node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  NodeNotReady             11m                  node-controller  Node ha-170194-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node ha-170194-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)  kubelet          Node ha-170194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m30s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           4m26s                node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-170194-m02 event: Registered Node ha-170194-m02 in Controller
	  Normal  NodeNotReady             111s                 node-controller  Node ha-170194-m02 status is now: NodeNotReady
	
	
	Name:               ha-170194-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170194-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=ha-170194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_31_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:31:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170194-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:42:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 13:42:09 +0000   Mon, 20 May 2024 13:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-170194-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04786f3c085342e689c4ca279f442854
	  System UUID:                04786f3c-0853-42e6-89c4-ca279f442854
	  Boot ID:                    57cf420c-75d0-4b86-a49d-04839c715bec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2wfmq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-98pk9              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-52pf8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-170194-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   RegisteredNode           4m27s                  node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   NodeNotReady             3m51s                  node-controller  Node ha-170194-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-170194-m04 event: Registered Node ha-170194-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-170194-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-170194-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-170194-m04 has been rebooted, boot id: 57cf420c-75d0-4b86-a49d-04839c715bec
	  Normal   NodeReady                2m49s                  kubelet          Node ha-170194-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-170194-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.658967] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056574] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.149929] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.138520] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.255022] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.918021] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.231733] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.055898] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.968265] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.072694] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.206801] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:29] kauditd_printk_skb: 74 callbacks suppressed
	[May20 13:39] systemd-fstab-generator[3681]: Ignoring "noauto" option for root device
	[  +0.147840] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.169526] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.139500] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.264446] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[  +1.391053] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +4.644038] kauditd_printk_skb: 126 callbacks suppressed
	[ +16.302134] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.618456] kauditd_printk_skb: 1 callbacks suppressed
	[May20 13:40] kauditd_printk_skb: 6 callbacks suppressed
	[ +32.200348] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [718c55ec406ad49e98fb24805d401e94b9e216b4eb4a9377f977f2aea7059028] <==
	{"level":"info","ts":"2024-05-20T13:41:35.133593Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.144372Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.144701Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.164807Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d468df581a6d993d","to":"967c73ca63f4755d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T13:41:35.164877Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:41:35.194495Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d468df581a6d993d","to":"967c73ca63f4755d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T13:41:35.194578Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:41:45.961984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.14535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-20T13:41:45.962246Z","caller":"traceutil/trace.go:171","msg":"trace[2131067986] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:2423; }","duration":"101.546067ms","start":"2024-05-20T13:41:45.860655Z","end":"2024-05-20T13:41:45.962202Z","steps":["trace[2131067986] 'count revisions from in-memory index tree'  (duration: 99.852001ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:42:23.844494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d switched to configuration voters=(11029178436930086447 15305728903112137021)"}
	{"level":"info","ts":"2024-05-20T13:42:23.847313Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f0381c3cc77c8c9d","local-member-id":"d468df581a6d993d","removed-remote-peer-id":"967c73ca63f4755d","removed-remote-peer-urls":["https://192.168.39.3:2380"]}
	{"level":"info","ts":"2024-05-20T13:42:23.847462Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:42:23.847998Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:42:23.848042Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:42:23.852293Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:42:23.852323Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:42:23.852373Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:42:23.852546Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d","error":"context canceled"}
	{"level":"warn","ts":"2024-05-20T13:42:23.852612Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"967c73ca63f4755d","error":"failed to read 967c73ca63f4755d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-05-20T13:42:23.852656Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:42:23.85282Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T13:42:23.85287Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:42:23.852888Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:42:23.852904Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"d468df581a6d993d","removed-remote-peer-id":"967c73ca63f4755d"}
	{"level":"warn","ts":"2024-05-20T13:42:23.867487Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.3:34552","server-name":"","error":"EOF"}
	
	
	==> etcd [bd7f5eac64d8e46fe70fd23f9dd2c0ede611d35145567cf7de22c0c03a4592d2] <==
	{"level":"info","ts":"2024-05-20T13:37:58.637559Z","caller":"traceutil/trace.go:171","msg":"trace[529815512] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"808.865735ms","start":"2024-05-20T13:37:57.828687Z","end":"2024-05-20T13:37:58.637553Z","steps":["trace[529815512] 'agreement among raft nodes before linearized reading'  (duration: 783.1836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:37:58.637602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T13:37:57.828675Z","time spent":"808.919041ms","remote":"127.0.0.1:54396","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	2024/05/20 13:37:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T13:37:58.611888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"676.653642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T13:37:58.637827Z","caller":"traceutil/trace.go:171","msg":"trace[955076974] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"702.604633ms","start":"2024-05-20T13:37:57.935215Z","end":"2024-05-20T13:37:58.637819Z","steps":["trace[955076974] 'agreement among raft nodes before linearized reading'  (duration: 676.669951ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:37:58.637867Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T13:37:57.935185Z","time spent":"702.672493ms","remote":"127.0.0.1:54346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	2024/05/20 13:37:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-20T13:37:58.684851Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d468df581a6d993d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T13:37:58.685249Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685308Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685363Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.6855Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.68556Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.68562Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685652Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"990f835a719da62f"}
	{"level":"info","ts":"2024-05-20T13:37:58.685677Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685706Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.685976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.686041Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d468df581a6d993d","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.686073Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"967c73ca63f4755d"}
	{"level":"info","ts":"2024-05-20T13:37:58.689208Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-05-20T13:37:58.689333Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-05-20T13:37:58.689374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-170194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"]}
	
	
	==> kernel <==
	 13:44:58 up 16 min,  0 users,  load average: 0.22, 0.39, 0.34
	Linux ha-170194 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45c16067478b03d8b1129f03123a30e9e5f4cffa48fd0f6d8d66a55129c1b931] <==
	I0520 13:44:15.519156       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:44:25.534464       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:44:25.534508       1 main.go:227] handling current node
	I0520 13:44:25.534530       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:44:25.534536       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:44:25.534764       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:44:25.534798       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:44:35.541618       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:44:35.541641       1 main.go:227] handling current node
	I0520 13:44:35.541653       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:44:35.541658       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:44:35.541759       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:44:35.541764       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:44:45.558221       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:44:45.558381       1 main.go:227] handling current node
	I0520 13:44:45.558491       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:44:45.558522       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:44:45.558761       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:44:45.558823       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	I0520 13:44:55.566368       1 main.go:223] Handling node with IPs: map[192.168.39.92:{}]
	I0520 13:44:55.566512       1 main.go:227] handling current node
	I0520 13:44:55.566544       1 main.go:223] Handling node with IPs: map[192.168.39.155:{}]
	I0520 13:44:55.566567       1 main.go:250] Node ha-170194-m02 has CIDR [10.244.1.0/24] 
	I0520 13:44:55.566708       1 main.go:223] Handling node with IPs: map[192.168.39.163:{}]
	I0520 13:44:55.566732       1 main.go:250] Node ha-170194-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6720f2ab5ded75d5497c5f267a5e6ce23c07eaab7d1c37a699eff5e21a187d99] <==
	I0520 13:39:37.900540       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 13:39:48.131268       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0520 13:39:49.536431       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 13:39:52.608562       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 13:39:59.328143       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.202:60718->10.96.0.1:443: read: connection reset by peer
	I0520 13:40:02.330359       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [ca3e13e017f2fa2c5625fcea967f30ed81a1fb98e926f9322dea1d8c9cf887e8] <==
	I0520 13:39:37.585083       1 options.go:221] external host was not specified, using 192.168.39.92
	I0520 13:39:37.593517       1 server.go:148] Version: v1.30.1
	I0520 13:39:37.593619       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:39:38.306349       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 13:39:38.317991       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:39:38.318371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 13:39:38.318400       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 13:39:38.318569       1 instance.go:299] Using reconciler: lease
	W0520 13:39:58.307221       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 13:39:58.307336       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 13:39:58.319080       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f7785a144a0e759050cfe76f22440222e2f754cdf34117eb8599199c8ccb715d] <==
	I0520 13:40:19.207415       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 13:40:19.287353       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:40:19.287381       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:40:19.288723       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 13:40:19.289364       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:40:19.289754       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:40:19.290210       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:40:19.290248       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:40:19.290254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:40:19.290259       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:40:19.290416       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:40:19.290445       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:40:19.298205       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:40:19.298797       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:40:19.316803       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:40:19.316858       1 policy_source.go:224] refreshing policies
	W0520 13:40:19.326177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.3]
	I0520 13:40:19.327817       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 13:40:19.336338       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 13:40:19.340034       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 13:40:19.383713       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:40:20.199323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 13:40:20.556233       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.3 192.168.39.92]
	W0520 13:40:30.565582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.92]
	W0520 13:42:40.564169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.155 192.168.39.92]
	
	
	==> kube-controller-manager [49de9de51e3a448dc283be641c93a8ce95a1a415fb7865e3759421c56fefddd4] <==
	I0520 13:39:38.506182       1 serving.go:380] Generated self-signed cert in-memory
	I0520 13:39:38.854415       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 13:39:38.854475       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:39:38.856250       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 13:39:38.856399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 13:39:38.856488       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 13:39:38.856724       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0520 13:39:59.326758       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.92:8443/healthz\": dial tcp 192.168.39.92:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b195802788ab566858825c71059df3150db2401c9d5946c65949cdb2c9e71a42] <==
	I0520 13:42:22.634878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.203µs"
	I0520 13:42:22.662857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.845µs"
	I0520 13:42:22.967582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.346µs"
	I0520 13:42:22.980327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.655µs"
	I0520 13:42:24.195840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.696973ms"
	I0520 13:42:24.196363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.413µs"
	I0520 13:42:35.099597       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	E0520 13:42:35.136783       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-170194-m03", UID:"6ba94c14-f8bd-4560-9b6d-6fdbbd070e88", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-170194-m03", UID:"facfb3ef-385a-4306-8b7e-a5eb0e6f5923", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-170194-m03" not found
	E0520 13:42:51.662441       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:42:51.662563       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:42:51.662590       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:42:51.662617       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:42:51.662641       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	I0520 13:43:06.795469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-170194-m04"
	I0520 13:43:06.996676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.022214ms"
	I0520 13:43:06.996776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.052µs"
	E0520 13:43:11.663546       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:43:11.663711       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:43:11.663793       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:43:11.663822       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	E0520 13:43:11.663879       1 gc_controller.go:153] "Failed to get node" err="node \"ha-170194-m03\" not found" logger="pod-garbage-collector-controller" node="ha-170194-m03"
	I0520 13:43:12.049227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.459745ms"
	I0520 13:43:12.050211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.206µs"
	I0520 13:43:24.453085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.706035ms"
	I0520 13:43:24.453237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.808µs"
	
	
	==> kube-proxy [2ca782f6be5aac84aca40cce645d0255c354e6d21013160a56d26ea90fd8051b] <==
	E0520 13:36:41.504390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:41.504523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:41.504631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.736639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736644       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.738077       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:48.736492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:48.738155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:57.953097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:57.953309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:36:57.953481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:36:57.953570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:01.025856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:01.026070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:16.385872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:16.385990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:16.386108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:16.386140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:28.673455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:28.674191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:50.176518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:50.177191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-170194&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 13:37:53.249191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 13:37:53.249391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1774": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3df09e40b72acd0754deff4f383a3f3ea5658277934b59145e3842be2182ce97] <==
	I0520 13:39:38.736717       1 server_linux.go:69] "Using iptables proxy"
	E0520 13:39:40.769149       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:43.840607       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:46.912709       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:39:53.068345       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 13:40:05.345686       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-170194\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 13:40:23.435261       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0520 13:40:23.480236       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:40:23.480350       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:40:23.480381       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:40:23.482810       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:40:23.483197       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:40:23.483473       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:40:23.485294       1 config.go:192] "Starting service config controller"
	I0520 13:40:23.485371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:40:23.485414       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:40:23.485430       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:40:23.486336       1 config.go:319] "Starting node config controller"
	I0520 13:40:23.487398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:40:23.586184       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:40:23.586350       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:40:23.587693       1 shared_informer.go:320] Caches are synced for node config
	W0520 13:43:25.583684       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0520 13:43:25.583684       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0520 13:43:25.583791       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [5c64aa00ef2d64c3757c1d4fc0eeb94d8397d4a8fc011a7aeccb9edb1566b91b] <==
	W0520 13:40:14.347147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.92:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:14.347219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.92:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:14.408327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:14.408447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:15.009193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.92:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:15.009268       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.92:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.183381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.92:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.183500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.92:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.869485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.92:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.869559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.92:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:16.959516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.92:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:16.959583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.92:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:17.039559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.92:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:17.039625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.92:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:17.116491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	E0520 13:40:17.116560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.92:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.92:8443: connect: connection refused
	W0520 13:40:19.215485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:40:19.215692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:40:19.216670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:40:19.224986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:40:19.216831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:40:19.220874       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:40:19.225319       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:40:19.225301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0520 13:40:32.933537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d125c402bd4cbcf3055b9a3385deea3773d86238de63e0749c82f14501883fd8] <==
	W0520 13:37:51.909356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:37:51.909543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:37:52.186504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:37:52.186546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:37:52.657822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:52.658002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:52.791294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:37:52.791360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:37:52.955870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:37:52.956788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:37:53.098534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:53.098585       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:53.231417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:37:53.231471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:37:53.518500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:37:53.518658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:37:54.185987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:37:54.186119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:37:54.335463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:37:54.335511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:37:54.862886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:37:54.862951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:37:58.587647       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0520 13:37:58.587975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0520 13:37:58.588083       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 13:40:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:40:34 ha-170194 kubelet[1373]: I0520 13:40:34.390780    1373 scope.go:117] "RemoveContainer" containerID="aec5b752545e8d9abd4d44817bed499e6bef842a475cd12e2a3dee7cadd5e0dc"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.252551    1373 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-170194" podUID="aed1bd37-f323-4950-b9d0-43e5e2eef5b7"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.290516    1373 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-170194"
	May 20 13:40:56 ha-170194 kubelet[1373]: I0520 13:40:56.832309    1373 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-170194" podUID="aed1bd37-f323-4950-b9d0-43e5e2eef5b7"
	May 20 13:41:34 ha-170194 kubelet[1373]: E0520 13:41:34.278317    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:41:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:41:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:41:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:41:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:42:34 ha-170194 kubelet[1373]: E0520 13:42:34.276710    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:42:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:42:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:42:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:42:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:43:34 ha-170194 kubelet[1373]: E0520 13:43:34.276481    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:43:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:43:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:43:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:43:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:44:34 ha-170194 kubelet[1373]: E0520 13:44:34.277318    1373 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:44:34 ha-170194 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:44:34 ha-170194 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:44:34 ha-170194 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:44:34 ha-170194 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:44:57.085314  632892 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18929-602525/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-170194 -n ha-170194
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.96s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114485
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-114485
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-114485: exit status 82 (2m1.950678915s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-114485-m03"  ...
	* Stopping node "multinode-114485-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-114485" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114485 --wait=true -v=8 --alsologtostderr
E0520 14:01:59.762527  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 14:03:01.807865  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114485 --wait=true -v=8 --alsologtostderr: (2m59.732788063s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114485
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-114485 -n multinode-114485
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-114485 logs -n 25: (1.556093805s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485:/home/docker/cp-test_multinode-114485-m02_multinode-114485.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485 sudo cat                                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m02_multinode-114485.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03:/home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485-m03 sudo cat                                   | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp testdata/cp-test.txt                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485:/home/docker/cp-test_multinode-114485-m03_multinode-114485.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485 sudo cat                                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02:/home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485-m02 sudo cat                                   | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-114485 node stop m03                                                          | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	| node    | multinode-114485 node start                                                             | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:59 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| stop    | -p multinode-114485                                                                     | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| start   | -p multinode-114485                                                                     | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:01 UTC | 20 May 24 14:04 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:04 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:01:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
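The log lines that follow use the klog-style format documented just above. For reference, a minimal Go sketch that parses one such line; the regular expression is an approximation of the stated format, not code taken from minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine approximates "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		sample := `I0520 14:01:05.176049  642041 out.go:291] Setting OutFile to fd 1 ...`
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}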
	I0520 14:01:05.176049  642041 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:01:05.176323  642041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:01:05.176333  642041 out.go:304] Setting ErrFile to fd 2...
	I0520 14:01:05.176337  642041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:01:05.176543  642041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:01:05.177139  642041 out.go:298] Setting JSON to false
	I0520 14:01:05.178178  642041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":13405,"bootTime":1716200260,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:01:05.178239  642041 start.go:139] virtualization: kvm guest
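The hostinfo line above carries the same fields that the gopsutil library reports for the host (hostname, uptime, bootTime, platform, virtualization role, and so on). A minimal sketch of collecting that data, assuming gopsutil/v3 is available; this is illustrative only, not minikube's start.go code:

	package main

	import (
		"encoding/json"
		"fmt"

		"github.com/shirou/gopsutil/v3/host"
	)

	func main() {
		// host.Info() gathers hostname, uptime, boot time, platform and
		// virtualization details, matching the fields in the hostinfo log line.
		info, err := host.Info()
		if err != nil {
			panic(err)
		}
		b, _ := json.Marshal(info)
		fmt.Println(string(b))
	}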
	I0520 14:01:05.181364  642041 out.go:177] * [multinode-114485] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:01:05.183580  642041 notify.go:220] Checking for updates...
	I0520 14:01:05.183590  642041 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:01:05.186004  642041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:01:05.188275  642041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:01:05.190405  642041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:01:05.192514  642041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:01:05.194728  642041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:01:05.197257  642041 config.go:182] Loaded profile config "multinode-114485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:01:05.197358  642041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:01:05.197760  642041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:01:05.197818  642041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:01:05.214141  642041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I0520 14:01:05.214695  642041 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:01:05.215362  642041 main.go:141] libmachine: Using API Version  1
	I0520 14:01:05.215393  642041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:01:05.215727  642041 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:01:05.215892  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.254342  642041 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:01:05.256465  642041 start.go:297] selected driver: kvm2
	I0520 14:01:05.256489  642041 start.go:901] validating driver "kvm2" against &{Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:01:05.256664  642041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:01:05.257034  642041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:01:05.257151  642041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:01:05.273751  642041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:01:05.274476  642041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:01:05.274533  642041 cni.go:84] Creating CNI manager for ""
	I0520 14:01:05.274542  642041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 14:01:05.274604  642041 start.go:340] cluster config:
	{Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:01:05.274782  642041 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:01:05.278902  642041 out.go:177] * Starting "multinode-114485" primary control-plane node in "multinode-114485" cluster
	I0520 14:01:05.281093  642041 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:01:05.281132  642041 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:01:05.281146  642041 cache.go:56] Caching tarball of preloaded images
	I0520 14:01:05.281257  642041 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:01:05.281272  642041 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
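The preload step above boils down to checking whether the cached tarball is already on disk and skipping the download if so. A sketch of that decision (path copied from the log; the additional verification minikube performs is omitted here):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Reuse the cached preload tarball when present; otherwise it would
		// have to be downloaded before images can be loaded.
		tarball := "/home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4"
		if _, err := os.Stat(tarball); err == nil {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing:", err)
		}
	}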
	I0520 14:01:05.281440  642041 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/config.json ...
	I0520 14:01:05.281682  642041 start.go:360] acquireMachinesLock for multinode-114485: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:01:05.281764  642041 start.go:364] duration metric: took 60.785µs to acquireMachinesLock for "multinode-114485"
	I0520 14:01:05.281785  642041 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:01:05.281800  642041 fix.go:54] fixHost starting: 
	I0520 14:01:05.282087  642041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:01:05.282124  642041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:01:05.297290  642041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0520 14:01:05.298277  642041 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:01:05.299210  642041 main.go:141] libmachine: Using API Version  1
	I0520 14:01:05.299234  642041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:01:05.299602  642041 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:01:05.299833  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.300033  642041 main.go:141] libmachine: (multinode-114485) Calling .GetState
	I0520 14:01:05.301665  642041 fix.go:112] recreateIfNeeded on multinode-114485: state=Running err=<nil>
	W0520 14:01:05.301687  642041 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:01:05.304297  642041 out.go:177] * Updating the running kvm2 "multinode-114485" VM ...
	I0520 14:01:05.306341  642041 machine.go:94] provisionDockerMachine start ...
	I0520 14:01:05.306369  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.306591  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.308913  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.309410  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.309440  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.309553  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.309744  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.309909  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.310055  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.310252  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.310425  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.310435  642041 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:01:05.426420  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-114485
	
	I0520 14:01:05.426453  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.426713  642041 buildroot.go:166] provisioning hostname "multinode-114485"
	I0520 14:01:05.426738  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.426918  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.429745  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.430236  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.430276  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.430359  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.430531  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.430651  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.430781  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.430979  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.431149  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.431162  642041 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-114485 && echo "multinode-114485" | sudo tee /etc/hostname
	I0520 14:01:05.568302  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-114485
	
	I0520 14:01:05.568346  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.571445  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.571839  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.571867  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.572045  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.572248  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.572419  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.572588  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.572743  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.572899  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.572916  642041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-114485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-114485/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-114485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:01:05.682797  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
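The hostname and /etc/hosts steps above are plain shell commands executed over SSH against the VM. A rough Go sketch of running one such remote command, assuming golang.org/x/crypto/ssh; host, user and key path are placeholders, and this is not libmachine's actual implementation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes a single command on the VM over SSH and returns its
	// combined output, roughly the shape of the provisioning steps above.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.141:22", "docker", "id_rsa", "hostname")
		fmt.Println(out, err)
	}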
	I0520 14:01:05.682835  642041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:01:05.682857  642041 buildroot.go:174] setting up certificates
	I0520 14:01:05.682865  642041 provision.go:84] configureAuth start
	I0520 14:01:05.682874  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.683154  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:01:05.686071  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.686302  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.686331  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.686541  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.688925  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.689289  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.689323  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.689448  642041 provision.go:143] copyHostCerts
	I0520 14:01:05.689483  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:01:05.689550  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:01:05.689571  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:01:05.689690  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:01:05.689833  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:01:05.689860  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:01:05.689868  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:01:05.689913  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:01:05.689985  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:01:05.690009  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:01:05.690017  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:01:05.690045  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:01:05.690111  642041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.multinode-114485 san=[127.0.0.1 192.168.39.141 localhost minikube multinode-114485]
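The "generating server cert" line above produces a server certificate whose SAN list covers 127.0.0.1, the VM IP, localhost, minikube and the node name. A small Go sketch of issuing a certificate with that kind of SAN list using crypto/x509; it is self-signed here for brevity, whereas the real certificate is signed with the minikube CA key, and it is not the provision.go code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-114485"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log's san=[...] list.
			DNSNames:    []string{"localhost", "minikube", "multinode-114485"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.141")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}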
	I0520 14:01:05.866385  642041 provision.go:177] copyRemoteCerts
	I0520 14:01:05.866474  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:01:05.866501  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.869642  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.870152  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.870180  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.870416  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.870623  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.870807  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.871005  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:01:05.955343  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 14:01:05.955423  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:01:05.982335  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 14:01:05.982410  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 14:01:06.005173  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 14:01:06.005252  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:01:06.028983  642041 provision.go:87] duration metric: took 346.104412ms to configureAuth
	I0520 14:01:06.029009  642041 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:01:06.029221  642041 config.go:182] Loaded profile config "multinode-114485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:01:06.029314  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:06.032312  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:06.032898  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:06.032931  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:06.033178  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:06.033408  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:06.033629  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:06.033803  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:06.033995  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:06.034179  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:06.034200  642041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:02:36.769867  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:02:36.769938  642041 machine.go:97] duration metric: took 1m31.46357262s to provisionDockerMachine
	I0520 14:02:36.769954  642041 start.go:293] postStartSetup for "multinode-114485" (driver="kvm2")
	I0520 14:02:36.769975  642041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:02:36.769999  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:36.770379  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:02:36.770410  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:36.773475  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.773921  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:36.773948  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.774097  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:36.774334  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.774513  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:36.774665  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:36.861348  642041 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:02:36.865269  642041 command_runner.go:130] > NAME=Buildroot
	I0520 14:02:36.865294  642041 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 14:02:36.865301  642041 command_runner.go:130] > ID=buildroot
	I0520 14:02:36.865309  642041 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 14:02:36.865316  642041 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 14:02:36.865361  642041 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:02:36.865377  642041 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:02:36.865442  642041 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:02:36.865534  642041 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:02:36.865547  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 14:02:36.865631  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:02:36.874596  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:02:36.897015  642041 start.go:296] duration metric: took 127.04368ms for postStartSetup
	I0520 14:02:36.897093  642041 fix.go:56] duration metric: took 1m31.615297003s for fixHost
	I0520 14:02:36.897138  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:36.899571  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.899907  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:36.899940  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.900118  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:36.900361  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.900515  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.900687  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:36.900892  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:02:36.901078  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:02:36.901089  642041 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 14:02:37.010467  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716213756.987615858
	
	I0520 14:02:37.010491  642041 fix.go:216] guest clock: 1716213756.987615858
	I0520 14:02:37.010501  642041 fix.go:229] Guest: 2024-05-20 14:02:36.987615858 +0000 UTC Remote: 2024-05-20 14:02:36.897100023 +0000 UTC m=+91.756949501 (delta=90.515835ms)
	I0520 14:02:37.010528  642041 fix.go:200] guest clock delta is within tolerance: 90.515835ms
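The clock check above compares the guest's reported time with the host-side timestamp and accepts small drift (here about 90 ms). A tiny sketch of that comparison; the one-second tolerance is an assumption, not minikube's actual constant:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1716213756, 987615858)                     // guest clock from the log
		remote := time.Date(2024, 5, 20, 14, 2, 36, 897100023, time.UTC) // host-side timestamp
		delta := guest.Sub(remote)
		const tolerance = time.Second // assumed threshold
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
	}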
	I0520 14:02:37.010535  642041 start.go:83] releasing machines lock for "multinode-114485", held for 1m31.728757337s
	I0520 14:02:37.010557  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.010844  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:02:37.012989  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.013425  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.013460  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.013635  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014154  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014371  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014462  642041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:02:37.014516  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:37.014616  642041 ssh_runner.go:195] Run: cat /version.json
	I0520 14:02:37.014635  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:37.016903  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017311  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.017341  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017368  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017491  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:37.017666  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:37.017802  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:37.017925  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.017946  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017943  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:37.018102  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:37.018266  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:37.018435  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:37.018587  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:37.099083  642041 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 14:02:37.130513  642041 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W0520 14:02:37.131359  642041 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:02:37.131488  642041 ssh_runner.go:195] Run: systemctl --version
	I0520 14:02:37.137215  642041 command_runner.go:130] > systemd 252 (252)
	I0520 14:02:37.137277  642041 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 14:02:37.137414  642041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:02:37.294809  642041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 14:02:37.300283  642041 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 14:02:37.300327  642041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:02:37.300376  642041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:02:37.309196  642041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 14:02:37.309225  642041 start.go:494] detecting cgroup driver to use...
	I0520 14:02:37.309356  642041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:02:37.325558  642041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:02:37.338166  642041 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:02:37.338236  642041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:02:37.351663  642041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:02:37.364828  642041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:02:37.507102  642041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:02:37.651121  642041 docker.go:233] disabling docker service ...
	I0520 14:02:37.651199  642041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:02:37.667573  642041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:02:37.680896  642041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:02:37.818304  642041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:02:37.963082  642041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:02:37.976610  642041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:02:37.994083  642041 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 14:02:37.994612  642041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 14:02:37.994673  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.004754  642041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:02:38.004838  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.015127  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.024888  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.034422  642041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:02:38.044663  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.054709  642041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.065605  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
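The sed invocations above rewrite the cri-o drop-in config in place (pause image, cgroup manager, sysctls). A Go sketch of the same kind of in-place edit for two of those keys, using regexp; the path and values mirror the log, but this is not the crio.go implementation and would need root to run against the real file:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Force the pause image and cgroup driver, like the sed commands above.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("updated", path)
	}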
	I0520 14:02:38.076094  642041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:02:38.085220  642041 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 14:02:38.085339  642041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:02:38.094475  642041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:02:38.233069  642041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 14:02:39.742685  642041 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.509573316s)
	I0520 14:02:39.742716  642041 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:02:39.742769  642041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:02:39.747790  642041 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 14:02:39.747812  642041 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 14:02:39.747819  642041 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0520 14:02:39.747825  642041 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 14:02:39.747830  642041 command_runner.go:130] > Access: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747837  642041 command_runner.go:130] > Modify: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747842  642041 command_runner.go:130] > Change: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747848  642041 command_runner.go:130] >  Birth: -
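"Will wait 60s for socket path" above is a bounded readiness wait on the CRI socket. A sketch of one way to implement that wait locally by dialing the unix socket until it answers or the deadline passes; minikube instead stats the path over ssh_runner, so this is an illustration, not its code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSocket polls a unix socket until it accepts a connection or the
	// timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("unix", path, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready within %v", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}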
	I0520 14:02:39.747883  642041 start.go:562] Will wait 60s for crictl version
	I0520 14:02:39.747946  642041 ssh_runner.go:195] Run: which crictl
	I0520 14:02:39.751678  642041 command_runner.go:130] > /usr/bin/crictl
	I0520 14:02:39.751754  642041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:02:39.786771  642041 command_runner.go:130] > Version:  0.1.0
	I0520 14:02:39.786797  642041 command_runner.go:130] > RuntimeName:  cri-o
	I0520 14:02:39.786805  642041 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 14:02:39.786812  642041 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 14:02:39.788050  642041 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 14:02:39.788150  642041 ssh_runner.go:195] Run: crio --version
	I0520 14:02:39.814493  642041 command_runner.go:130] > crio version 1.29.1
	I0520 14:02:39.814519  642041 command_runner.go:130] > Version:        1.29.1
	I0520 14:02:39.814528  642041 command_runner.go:130] > GitCommit:      unknown
	I0520 14:02:39.814534  642041 command_runner.go:130] > GitCommitDate:  unknown
	I0520 14:02:39.814540  642041 command_runner.go:130] > GitTreeState:   clean
	I0520 14:02:39.814547  642041 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 14:02:39.814552  642041 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 14:02:39.814558  642041 command_runner.go:130] > Compiler:       gc
	I0520 14:02:39.814565  642041 command_runner.go:130] > Platform:       linux/amd64
	I0520 14:02:39.814570  642041 command_runner.go:130] > Linkmode:       dynamic
	I0520 14:02:39.814578  642041 command_runner.go:130] > BuildTags:      
	I0520 14:02:39.814585  642041 command_runner.go:130] >   containers_image_ostree_stub
	I0520 14:02:39.814596  642041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 14:02:39.814602  642041 command_runner.go:130] >   btrfs_noversion
	I0520 14:02:39.814611  642041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 14:02:39.814621  642041 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 14:02:39.814628  642041 command_runner.go:130] >   seccomp
	I0520 14:02:39.814645  642041 command_runner.go:130] > LDFlags:          unknown
	I0520 14:02:39.814655  642041 command_runner.go:130] > SeccompEnabled:   true
	I0520 14:02:39.814663  642041 command_runner.go:130] > AppArmorEnabled:  false
	I0520 14:02:39.815765  642041 ssh_runner.go:195] Run: crio --version
	I0520 14:02:39.841483  642041 command_runner.go:130] > crio version 1.29.1
	I0520 14:02:39.841514  642041 command_runner.go:130] > Version:        1.29.1
	I0520 14:02:39.841523  642041 command_runner.go:130] > GitCommit:      unknown
	I0520 14:02:39.841530  642041 command_runner.go:130] > GitCommitDate:  unknown
	I0520 14:02:39.841537  642041 command_runner.go:130] > GitTreeState:   clean
	I0520 14:02:39.841546  642041 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 14:02:39.841553  642041 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 14:02:39.841561  642041 command_runner.go:130] > Compiler:       gc
	I0520 14:02:39.841568  642041 command_runner.go:130] > Platform:       linux/amd64
	I0520 14:02:39.841578  642041 command_runner.go:130] > Linkmode:       dynamic
	I0520 14:02:39.841586  642041 command_runner.go:130] > BuildTags:      
	I0520 14:02:39.841596  642041 command_runner.go:130] >   containers_image_ostree_stub
	I0520 14:02:39.841606  642041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 14:02:39.841616  642041 command_runner.go:130] >   btrfs_noversion
	I0520 14:02:39.841626  642041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 14:02:39.841635  642041 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 14:02:39.841641  642041 command_runner.go:130] >   seccomp
	I0520 14:02:39.841651  642041 command_runner.go:130] > LDFlags:          unknown
	I0520 14:02:39.841658  642041 command_runner.go:130] > SeccompEnabled:   true
	I0520 14:02:39.841668  642041 command_runner.go:130] > AppArmorEnabled:  false
	I0520 14:02:39.849023  642041 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 14:02:39.851282  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:02:39.854086  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:39.854503  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:39.854535  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:39.854742  642041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 14:02:39.860812  642041 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 14:02:39.861237  642041 kubeadm.go:877] updating cluster {Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 14:02:39.861390  642041 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:02:39.861448  642041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:02:39.897694  642041 command_runner.go:130] > {
	I0520 14:02:39.897718  642041 command_runner.go:130] >   "images": [
	I0520 14:02:39.897722  642041 command_runner.go:130] >     {
	I0520 14:02:39.897730  642041 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 14:02:39.897735  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897741  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 14:02:39.897744  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897748  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897757  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 14:02:39.897767  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 14:02:39.897772  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897779  642041 command_runner.go:130] >       "size": "65291810",
	I0520 14:02:39.897787  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.897792  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.897807  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.897815  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.897821  642041 command_runner.go:130] >     },
	I0520 14:02:39.897826  642041 command_runner.go:130] >     {
	I0520 14:02:39.897836  642041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 14:02:39.897842  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897848  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 14:02:39.897852  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897856  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897863  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 14:02:39.897870  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 14:02:39.897875  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897880  642041 command_runner.go:130] >       "size": "1363676",
	I0520 14:02:39.897887  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.897905  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.897915  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.897923  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.897934  642041 command_runner.go:130] >     },
	I0520 14:02:39.897942  642041 command_runner.go:130] >     {
	I0520 14:02:39.897949  642041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 14:02:39.897956  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897961  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 14:02:39.897967  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897972  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897988  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 14:02:39.898004  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 14:02:39.898014  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898024  642041 command_runner.go:130] >       "size": "31470524",
	I0520 14:02:39.898034  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898042  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898046  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898052  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898056  642041 command_runner.go:130] >     },
	I0520 14:02:39.898061  642041 command_runner.go:130] >     {
	I0520 14:02:39.898068  642041 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 14:02:39.898074  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898081  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 14:02:39.898090  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898103  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898119  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 14:02:39.898145  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 14:02:39.898155  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898160  642041 command_runner.go:130] >       "size": "61245718",
	I0520 14:02:39.898167  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898171  642041 command_runner.go:130] >       "username": "nonroot",
	I0520 14:02:39.898178  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898184  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898192  642041 command_runner.go:130] >     },
	I0520 14:02:39.898200  642041 command_runner.go:130] >     {
	I0520 14:02:39.898214  642041 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 14:02:39.898224  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898235  642041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 14:02:39.898244  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898253  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898269  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 14:02:39.898281  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 14:02:39.898289  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898295  642041 command_runner.go:130] >       "size": "150779692",
	I0520 14:02:39.898305  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898312  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898321  642041 command_runner.go:130] >       },
	I0520 14:02:39.898332  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898341  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898350  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898358  642041 command_runner.go:130] >     },
	I0520 14:02:39.898366  642041 command_runner.go:130] >     {
	I0520 14:02:39.898379  642041 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 14:02:39.898386  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898393  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 14:02:39.898402  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898412  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898427  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 14:02:39.898441  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 14:02:39.898450  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898459  642041 command_runner.go:130] >       "size": "117601759",
	I0520 14:02:39.898466  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898471  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898479  642041 command_runner.go:130] >       },
	I0520 14:02:39.898489  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898499  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898508  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898517  642041 command_runner.go:130] >     },
	I0520 14:02:39.898525  642041 command_runner.go:130] >     {
	I0520 14:02:39.898535  642041 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 14:02:39.898545  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898554  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 14:02:39.898561  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898567  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898584  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 14:02:39.898598  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 14:02:39.898609  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898618  642041 command_runner.go:130] >       "size": "112170310",
	I0520 14:02:39.898627  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898636  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898642  642041 command_runner.go:130] >       },
	I0520 14:02:39.898647  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898657  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898666  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898672  642041 command_runner.go:130] >     },
	I0520 14:02:39.898682  642041 command_runner.go:130] >     {
	I0520 14:02:39.898695  642041 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 14:02:39.898705  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898716  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 14:02:39.898724  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898733  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898760  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 14:02:39.898772  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 14:02:39.898777  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898784  642041 command_runner.go:130] >       "size": "85933465",
	I0520 14:02:39.898790  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898796  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898803  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898811  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898816  642041 command_runner.go:130] >     },
	I0520 14:02:39.898821  642041 command_runner.go:130] >     {
	I0520 14:02:39.898827  642041 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 14:02:39.898835  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898844  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 14:02:39.898849  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898856  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898870  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 14:02:39.898885  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 14:02:39.898893  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898900  642041 command_runner.go:130] >       "size": "63026504",
	I0520 14:02:39.898909  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898913  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898926  642041 command_runner.go:130] >       },
	I0520 14:02:39.898954  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898962  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898968  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898973  642041 command_runner.go:130] >     },
	I0520 14:02:39.898978  642041 command_runner.go:130] >     {
	I0520 14:02:39.898987  642041 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 14:02:39.898996  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.899006  642041 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 14:02:39.899011  642041 command_runner.go:130] >       ],
	I0520 14:02:39.899020  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.899033  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 14:02:39.899047  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 14:02:39.899055  642041 command_runner.go:130] >       ],
	I0520 14:02:39.899061  642041 command_runner.go:130] >       "size": "750414",
	I0520 14:02:39.899071  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.899078  642041 command_runner.go:130] >         "value": "65535"
	I0520 14:02:39.899087  642041 command_runner.go:130] >       },
	I0520 14:02:39.899093  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.899103  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.899110  642041 command_runner.go:130] >       "pinned": true
	I0520 14:02:39.899118  642041 command_runner.go:130] >     }
	I0520 14:02:39.899124  642041 command_runner.go:130] >   ]
	I0520 14:02:39.899133  642041 command_runner.go:130] > }
	I0520 14:02:39.899365  642041 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:02:39.899381  642041 crio.go:433] Images already preloaded, skipping extraction
	I0520 14:02:39.899432  642041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:02:39.930098  642041 command_runner.go:130] > {
	I0520 14:02:39.930127  642041 command_runner.go:130] >   "images": [
	I0520 14:02:39.930133  642041 command_runner.go:130] >     {
	I0520 14:02:39.930146  642041 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 14:02:39.930153  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930162  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 14:02:39.930167  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930175  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930189  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 14:02:39.930203  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 14:02:39.930213  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930223  642041 command_runner.go:130] >       "size": "65291810",
	I0520 14:02:39.930232  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930241  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930259  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930269  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930275  642041 command_runner.go:130] >     },
	I0520 14:02:39.930284  642041 command_runner.go:130] >     {
	I0520 14:02:39.930298  642041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 14:02:39.930308  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930319  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 14:02:39.930327  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930338  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930352  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 14:02:39.930367  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 14:02:39.930375  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930384  642041 command_runner.go:130] >       "size": "1363676",
	I0520 14:02:39.930392  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930403  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930412  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930420  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930428  642041 command_runner.go:130] >     },
	I0520 14:02:39.930436  642041 command_runner.go:130] >     {
	I0520 14:02:39.930444  642041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 14:02:39.930453  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930461  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 14:02:39.930469  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930478  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930491  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 14:02:39.930506  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 14:02:39.930514  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930524  642041 command_runner.go:130] >       "size": "31470524",
	I0520 14:02:39.930533  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930543  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930548  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930557  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930564  642041 command_runner.go:130] >     },
	I0520 14:02:39.930571  642041 command_runner.go:130] >     {
	I0520 14:02:39.930580  642041 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 14:02:39.930589  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930600  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 14:02:39.930608  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930617  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930630  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 14:02:39.930648  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 14:02:39.930657  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930667  642041 command_runner.go:130] >       "size": "61245718",
	I0520 14:02:39.930676  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930686  642041 command_runner.go:130] >       "username": "nonroot",
	I0520 14:02:39.930695  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930704  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930713  642041 command_runner.go:130] >     },
	I0520 14:02:39.930719  642041 command_runner.go:130] >     {
	I0520 14:02:39.930732  642041 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 14:02:39.930741  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930751  642041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 14:02:39.930759  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930769  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930783  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 14:02:39.930796  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 14:02:39.930804  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930810  642041 command_runner.go:130] >       "size": "150779692",
	I0520 14:02:39.930819  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.930828  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.930837  642041 command_runner.go:130] >       },
	I0520 14:02:39.930846  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930857  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930867  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930875  642041 command_runner.go:130] >     },
	I0520 14:02:39.930883  642041 command_runner.go:130] >     {
	I0520 14:02:39.930895  642041 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 14:02:39.930904  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930914  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 14:02:39.930922  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930938  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930953  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 14:02:39.930967  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 14:02:39.930975  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930981  642041 command_runner.go:130] >       "size": "117601759",
	I0520 14:02:39.930988  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.930993  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931002  642041 command_runner.go:130] >       },
	I0520 14:02:39.931007  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931014  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931023  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931030  642041 command_runner.go:130] >     },
	I0520 14:02:39.931038  642041 command_runner.go:130] >     {
	I0520 14:02:39.931046  642041 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 14:02:39.931054  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931065  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 14:02:39.931071  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931077  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931091  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 14:02:39.931105  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 14:02:39.931114  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931124  642041 command_runner.go:130] >       "size": "112170310",
	I0520 14:02:39.931134  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931143  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931151  642041 command_runner.go:130] >       },
	I0520 14:02:39.931161  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931170  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931179  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931188  642041 command_runner.go:130] >     },
	I0520 14:02:39.931197  642041 command_runner.go:130] >     {
	I0520 14:02:39.931210  642041 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 14:02:39.931223  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931235  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 14:02:39.931243  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931250  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931275  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 14:02:39.931288  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 14:02:39.931294  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931299  642041 command_runner.go:130] >       "size": "85933465",
	I0520 14:02:39.931305  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.931309  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931315  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931319  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931325  642041 command_runner.go:130] >     },
	I0520 14:02:39.931329  642041 command_runner.go:130] >     {
	I0520 14:02:39.931337  642041 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 14:02:39.931344  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931349  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 14:02:39.931354  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931359  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931368  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 14:02:39.931377  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 14:02:39.931383  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931387  642041 command_runner.go:130] >       "size": "63026504",
	I0520 14:02:39.931394  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931397  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931402  642041 command_runner.go:130] >       },
	I0520 14:02:39.931405  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931411  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931424  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931429  642041 command_runner.go:130] >     },
	I0520 14:02:39.931434  642041 command_runner.go:130] >     {
	I0520 14:02:39.931443  642041 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 14:02:39.931450  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931457  642041 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 14:02:39.931462  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931469  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931481  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 14:02:39.931492  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 14:02:39.931495  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931500  642041 command_runner.go:130] >       "size": "750414",
	I0520 14:02:39.931503  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931507  642041 command_runner.go:130] >         "value": "65535"
	I0520 14:02:39.931510  642041 command_runner.go:130] >       },
	I0520 14:02:39.931517  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931524  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931532  642041 command_runner.go:130] >       "pinned": true
	I0520 14:02:39.931540  642041 command_runner.go:130] >     }
	I0520 14:02:39.931545  642041 command_runner.go:130] >   ]
	I0520 14:02:39.931549  642041 command_runner.go:130] > }
	I0520 14:02:39.931784  642041 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:02:39.931803  642041 cache_images.go:84] Images are preloaded, skipping loading
	I0520 14:02:39.931812  642041 kubeadm.go:928] updating node { 192.168.39.141 8443 v1.30.1 crio true true} ...
	I0520 14:02:39.931923  642041 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-114485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 14:02:39.931996  642041 ssh_runner.go:195] Run: crio config
	I0520 14:02:39.966426  642041 command_runner.go:130] ! time="2024-05-20 14:02:39.943707117Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 14:02:39.973837  642041 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 14:02:39.981673  642041 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 14:02:39.981700  642041 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 14:02:39.981710  642041 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 14:02:39.981714  642041 command_runner.go:130] > #
	I0520 14:02:39.981724  642041 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 14:02:39.981730  642041 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 14:02:39.981737  642041 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 14:02:39.981743  642041 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 14:02:39.981749  642041 command_runner.go:130] > # reload'.
	I0520 14:02:39.981755  642041 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 14:02:39.981764  642041 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 14:02:39.981770  642041 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 14:02:39.981779  642041 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 14:02:39.981783  642041 command_runner.go:130] > [crio]
	I0520 14:02:39.981796  642041 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 14:02:39.981807  642041 command_runner.go:130] > # containers images, in this directory.
	I0520 14:02:39.981814  642041 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 14:02:39.981831  642041 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 14:02:39.981841  642041 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 14:02:39.981853  642041 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 14:02:39.981860  642041 command_runner.go:130] > # imagestore = ""
	I0520 14:02:39.981865  642041 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 14:02:39.981874  642041 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 14:02:39.981878  642041 command_runner.go:130] > storage_driver = "overlay"
	I0520 14:02:39.981887  642041 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 14:02:39.981896  642041 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 14:02:39.981906  642041 command_runner.go:130] > storage_option = [
	I0520 14:02:39.981913  642041 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 14:02:39.981925  642041 command_runner.go:130] > ]
	I0520 14:02:39.981939  642041 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 14:02:39.981951  642041 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 14:02:39.981961  642041 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 14:02:39.981968  642041 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 14:02:39.981975  642041 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 14:02:39.981979  642041 command_runner.go:130] > # always happen on a node reboot
	I0520 14:02:39.981984  642041 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 14:02:39.982001  642041 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 14:02:39.982015  642041 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 14:02:39.982028  642041 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 14:02:39.982040  642041 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 14:02:39.982054  642041 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 14:02:39.982069  642041 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 14:02:39.982078  642041 command_runner.go:130] > # internal_wipe = true
	I0520 14:02:39.982087  642041 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 14:02:39.982099  642041 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 14:02:39.982110  642041 command_runner.go:130] > # internal_repair = false
	I0520 14:02:39.982119  642041 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 14:02:39.982131  642041 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 14:02:39.982143  642041 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 14:02:39.982155  642041 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 14:02:39.982166  642041 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 14:02:39.982176  642041 command_runner.go:130] > [crio.api]
	I0520 14:02:39.982184  642041 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 14:02:39.982194  642041 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 14:02:39.982203  642041 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 14:02:39.982213  642041 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 14:02:39.982225  642041 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 14:02:39.982236  642041 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 14:02:39.982245  642041 command_runner.go:130] > # stream_port = "0"
	I0520 14:02:39.982254  642041 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 14:02:39.982261  642041 command_runner.go:130] > # stream_enable_tls = false
	I0520 14:02:39.982273  642041 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 14:02:39.982282  642041 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 14:02:39.982288  642041 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 14:02:39.982298  642041 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 14:02:39.982303  642041 command_runner.go:130] > # minutes.
	I0520 14:02:39.982307  642041 command_runner.go:130] > # stream_tls_cert = ""
	I0520 14:02:39.982314  642041 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 14:02:39.982319  642041 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 14:02:39.982328  642041 command_runner.go:130] > # stream_tls_key = ""
	I0520 14:02:39.982334  642041 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 14:02:39.982342  642041 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 14:02:39.982357  642041 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 14:02:39.982363  642041 command_runner.go:130] > # stream_tls_ca = ""
	I0520 14:02:39.982370  642041 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 14:02:39.982377  642041 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 14:02:39.982383  642041 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 14:02:39.982390  642041 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 14:02:39.982396  642041 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 14:02:39.982402  642041 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 14:02:39.982405  642041 command_runner.go:130] > [crio.runtime]
	I0520 14:02:39.982411  642041 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 14:02:39.982419  642041 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 14:02:39.982422  642041 command_runner.go:130] > # "nofile=1024:2048"
	I0520 14:02:39.982431  642041 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 14:02:39.982434  642041 command_runner.go:130] > # default_ulimits = [
	I0520 14:02:39.982440  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982446  642041 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 14:02:39.982452  642041 command_runner.go:130] > # no_pivot = false
	I0520 14:02:39.982457  642041 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 14:02:39.982465  642041 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 14:02:39.982470  642041 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 14:02:39.982478  642041 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 14:02:39.982485  642041 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 14:02:39.982493  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 14:02:39.982497  642041 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 14:02:39.982504  642041 command_runner.go:130] > # Cgroup setting for conmon
	I0520 14:02:39.982510  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 14:02:39.982516  642041 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 14:02:39.982522  642041 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 14:02:39.982530  642041 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 14:02:39.982536  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 14:02:39.982543  642041 command_runner.go:130] > conmon_env = [
	I0520 14:02:39.982549  642041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 14:02:39.982554  642041 command_runner.go:130] > ]
	I0520 14:02:39.982559  642041 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 14:02:39.982566  642041 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 14:02:39.982571  642041 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 14:02:39.982577  642041 command_runner.go:130] > # default_env = [
	I0520 14:02:39.982580  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982585  642041 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 14:02:39.982594  642041 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 14:02:39.982598  642041 command_runner.go:130] > # selinux = false
	I0520 14:02:39.982604  642041 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 14:02:39.982615  642041 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 14:02:39.982622  642041 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 14:02:39.982626  642041 command_runner.go:130] > # seccomp_profile = ""
	I0520 14:02:39.982632  642041 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 14:02:39.982640  642041 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 14:02:39.982646  642041 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 14:02:39.982652  642041 command_runner.go:130] > # which might increase security.
	I0520 14:02:39.982656  642041 command_runner.go:130] > # This option is currently deprecated,
	I0520 14:02:39.982665  642041 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 14:02:39.982669  642041 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 14:02:39.982676  642041 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 14:02:39.982683  642041 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 14:02:39.982690  642041 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 14:02:39.982696  642041 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 14:02:39.982700  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.982704  642041 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 14:02:39.982710  642041 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 14:02:39.982716  642041 command_runner.go:130] > # the cgroup blockio controller.
	I0520 14:02:39.982720  642041 command_runner.go:130] > # blockio_config_file = ""
	I0520 14:02:39.982729  642041 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 14:02:39.982733  642041 command_runner.go:130] > # blockio parameters.
	I0520 14:02:39.982739  642041 command_runner.go:130] > # blockio_reload = false
	I0520 14:02:39.982746  642041 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 14:02:39.982751  642041 command_runner.go:130] > # irqbalance daemon.
	I0520 14:02:39.982756  642041 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 14:02:39.982764  642041 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 14:02:39.982770  642041 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 14:02:39.982778  642041 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 14:02:39.982784  642041 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 14:02:39.982792  642041 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 14:02:39.982797  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.982801  642041 command_runner.go:130] > # rdt_config_file = ""
	I0520 14:02:39.982810  642041 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 14:02:39.982814  642041 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 14:02:39.982834  642041 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 14:02:39.982841  642041 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 14:02:39.982847  642041 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 14:02:39.982855  642041 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 14:02:39.982859  642041 command_runner.go:130] > # will be added.
	I0520 14:02:39.982864  642041 command_runner.go:130] > # default_capabilities = [
	I0520 14:02:39.982867  642041 command_runner.go:130] > # 	"CHOWN",
	I0520 14:02:39.982873  642041 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 14:02:39.982877  642041 command_runner.go:130] > # 	"FSETID",
	I0520 14:02:39.982880  642041 command_runner.go:130] > # 	"FOWNER",
	I0520 14:02:39.982884  642041 command_runner.go:130] > # 	"SETGID",
	I0520 14:02:39.982887  642041 command_runner.go:130] > # 	"SETUID",
	I0520 14:02:39.982891  642041 command_runner.go:130] > # 	"SETPCAP",
	I0520 14:02:39.982895  642041 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 14:02:39.982901  642041 command_runner.go:130] > # 	"KILL",
	I0520 14:02:39.982904  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982915  642041 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 14:02:39.982928  642041 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 14:02:39.982937  642041 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 14:02:39.982948  642041 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 14:02:39.982960  642041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 14:02:39.982969  642041 command_runner.go:130] > default_sysctls = [
	I0520 14:02:39.982976  642041 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 14:02:39.982980  642041 command_runner.go:130] > ]
	I0520 14:02:39.982987  642041 command_runner.go:130] > # List of devices on the host that a
	I0520 14:02:39.982998  642041 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 14:02:39.983004  642041 command_runner.go:130] > # allowed_devices = [
	I0520 14:02:39.983008  642041 command_runner.go:130] > # 	"/dev/fuse",
	I0520 14:02:39.983012  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983017  642041 command_runner.go:130] > # List of additional devices. specified as
	I0520 14:02:39.983033  642041 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 14:02:39.983040  642041 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 14:02:39.983046  642041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 14:02:39.983053  642041 command_runner.go:130] > # additional_devices = [
	I0520 14:02:39.983056  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983064  642041 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 14:02:39.983068  642041 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 14:02:39.983072  642041 command_runner.go:130] > # 	"/etc/cdi",
	I0520 14:02:39.983076  642041 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 14:02:39.983082  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983088  642041 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 14:02:39.983096  642041 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 14:02:39.983099  642041 command_runner.go:130] > # Defaults to false.
	I0520 14:02:39.983106  642041 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 14:02:39.983112  642041 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 14:02:39.983120  642041 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 14:02:39.983124  642041 command_runner.go:130] > # hooks_dir = [
	I0520 14:02:39.983130  642041 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 14:02:39.983134  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983140  642041 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 14:02:39.983148  642041 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 14:02:39.983153  642041 command_runner.go:130] > # its default mounts from the following two files:
	I0520 14:02:39.983156  642041 command_runner.go:130] > #
	I0520 14:02:39.983164  642041 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 14:02:39.983177  642041 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 14:02:39.983189  642041 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 14:02:39.983198  642041 command_runner.go:130] > #
	I0520 14:02:39.983207  642041 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 14:02:39.983220  642041 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 14:02:39.983233  642041 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 14:02:39.983247  642041 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 14:02:39.983255  642041 command_runner.go:130] > #
	I0520 14:02:39.983262  642041 command_runner.go:130] > # default_mounts_file = ""
	I0520 14:02:39.983273  642041 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 14:02:39.983282  642041 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 14:02:39.983289  642041 command_runner.go:130] > pids_limit = 1024
	I0520 14:02:39.983295  642041 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0520 14:02:39.983303  642041 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 14:02:39.983310  642041 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 14:02:39.983320  642041 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 14:02:39.983326  642041 command_runner.go:130] > # log_size_max = -1
	I0520 14:02:39.983333  642041 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 14:02:39.983337  642041 command_runner.go:130] > # log_to_journald = false
	I0520 14:02:39.983344  642041 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 14:02:39.983351  642041 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 14:02:39.983356  642041 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 14:02:39.983363  642041 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 14:02:39.983369  642041 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 14:02:39.983375  642041 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 14:02:39.983380  642041 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 14:02:39.983386  642041 command_runner.go:130] > # read_only = false
	I0520 14:02:39.983392  642041 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 14:02:39.983400  642041 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 14:02:39.983404  642041 command_runner.go:130] > # live configuration reload.
	I0520 14:02:39.983408  642041 command_runner.go:130] > # log_level = "info"
	I0520 14:02:39.983414  642041 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 14:02:39.983421  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.983424  642041 command_runner.go:130] > # log_filter = ""
	I0520 14:02:39.983435  642041 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 14:02:39.983445  642041 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 14:02:39.983449  642041 command_runner.go:130] > # separated by comma.
	I0520 14:02:39.983456  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983462  642041 command_runner.go:130] > # uid_mappings = ""
	I0520 14:02:39.983468  642041 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 14:02:39.983477  642041 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 14:02:39.983481  642041 command_runner.go:130] > # separated by comma.
	I0520 14:02:39.983490  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983496  642041 command_runner.go:130] > # gid_mappings = ""
	I0520 14:02:39.983502  642041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 14:02:39.983510  642041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 14:02:39.983516  642041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 14:02:39.983526  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983530  642041 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 14:02:39.983539  642041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 14:02:39.983545  642041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 14:02:39.983553  642041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 14:02:39.983560  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983567  642041 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 14:02:39.983573  642041 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 14:02:39.983578  642041 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 14:02:39.983586  642041 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 14:02:39.983590  642041 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 14:02:39.983597  642041 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 14:02:39.983603  642041 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 14:02:39.983610  642041 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 14:02:39.983615  642041 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 14:02:39.983619  642041 command_runner.go:130] > drop_infra_ctr = false
	I0520 14:02:39.983625  642041 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 14:02:39.983633  642041 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 14:02:39.983640  642041 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 14:02:39.983646  642041 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 14:02:39.983652  642041 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 14:02:39.983659  642041 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 14:02:39.983665  642041 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 14:02:39.983669  642041 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 14:02:39.983677  642041 command_runner.go:130] > # shared_cpuset = ""
	I0520 14:02:39.983682  642041 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 14:02:39.983689  642041 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 14:02:39.983693  642041 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 14:02:39.983702  642041 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 14:02:39.983706  642041 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 14:02:39.983714  642041 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 14:02:39.983721  642041 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 14:02:39.983728  642041 command_runner.go:130] > # enable_criu_support = false
	I0520 14:02:39.983732  642041 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 14:02:39.983740  642041 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 14:02:39.983744  642041 command_runner.go:130] > # enable_pod_events = false
	I0520 14:02:39.983753  642041 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 14:02:39.983766  642041 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 14:02:39.983770  642041 command_runner.go:130] > # default_runtime = "runc"
	I0520 14:02:39.983777  642041 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 14:02:39.983785  642041 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 14:02:39.983796  642041 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 14:02:39.983804  642041 command_runner.go:130] > # creation as a file is not desired either.
	I0520 14:02:39.983812  642041 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 14:02:39.983819  642041 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 14:02:39.983823  642041 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 14:02:39.983827  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983833  642041 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 14:02:39.983842  642041 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 14:02:39.983848  642041 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 14:02:39.983855  642041 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 14:02:39.983858  642041 command_runner.go:130] > #
	I0520 14:02:39.983865  642041 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 14:02:39.983870  642041 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 14:02:39.983899  642041 command_runner.go:130] > # runtime_type = "oci"
	I0520 14:02:39.983908  642041 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 14:02:39.983912  642041 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 14:02:39.983916  642041 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 14:02:39.983920  642041 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 14:02:39.983924  642041 command_runner.go:130] > # monitor_env = []
	I0520 14:02:39.983929  642041 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 14:02:39.983935  642041 command_runner.go:130] > # allowed_annotations = []
	I0520 14:02:39.983940  642041 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 14:02:39.983946  642041 command_runner.go:130] > # Where:
	I0520 14:02:39.983951  642041 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 14:02:39.983960  642041 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 14:02:39.983967  642041 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 14:02:39.983975  642041 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 14:02:39.983980  642041 command_runner.go:130] > #   in $PATH.
	I0520 14:02:39.983987  642041 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 14:02:39.983992  642041 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 14:02:39.984000  642041 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 14:02:39.984004  642041 command_runner.go:130] > #   state.
	I0520 14:02:39.984009  642041 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 14:02:39.984015  642041 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0520 14:02:39.984025  642041 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 14:02:39.984033  642041 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 14:02:39.984039  642041 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 14:02:39.984048  642041 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 14:02:39.984052  642041 command_runner.go:130] > #   The currently recognized values are:
	I0520 14:02:39.984058  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 14:02:39.984068  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 14:02:39.984074  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 14:02:39.984082  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 14:02:39.984088  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 14:02:39.984097  642041 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 14:02:39.984104  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 14:02:39.984112  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 14:02:39.984118  642041 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 14:02:39.984126  642041 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 14:02:39.984133  642041 command_runner.go:130] > #   deprecated option "conmon".
	I0520 14:02:39.984142  642041 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 14:02:39.984146  642041 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 14:02:39.984154  642041 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 14:02:39.984159  642041 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 14:02:39.984171  642041 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0520 14:02:39.984182  642041 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 14:02:39.984193  642041 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 14:02:39.984205  642041 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 14:02:39.984212  642041 command_runner.go:130] > #
	I0520 14:02:39.984220  642041 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 14:02:39.984229  642041 command_runner.go:130] > #
	I0520 14:02:39.984240  642041 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 14:02:39.984251  642041 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 14:02:39.984254  642041 command_runner.go:130] > #
	I0520 14:02:39.984260  642041 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 14:02:39.984269  642041 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 14:02:39.984272  642041 command_runner.go:130] > #
	I0520 14:02:39.984282  642041 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 14:02:39.984286  642041 command_runner.go:130] > # feature.
	I0520 14:02:39.984291  642041 command_runner.go:130] > #
	I0520 14:02:39.984297  642041 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 14:02:39.984304  642041 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 14:02:39.984311  642041 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 14:02:39.984318  642041 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 14:02:39.984324  642041 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 14:02:39.984330  642041 command_runner.go:130] > #
	I0520 14:02:39.984335  642041 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 14:02:39.984343  642041 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 14:02:39.984346  642041 command_runner.go:130] > #
	I0520 14:02:39.984352  642041 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 14:02:39.984360  642041 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 14:02:39.984363  642041 command_runner.go:130] > #
	I0520 14:02:39.984369  642041 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 14:02:39.984377  642041 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 14:02:39.984381  642041 command_runner.go:130] > # limitation.
	I0520 14:02:39.984388  642041 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 14:02:39.984392  642041 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 14:02:39.984398  642041 command_runner.go:130] > runtime_type = "oci"
	I0520 14:02:39.984402  642041 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 14:02:39.984408  642041 command_runner.go:130] > runtime_config_path = ""
	I0520 14:02:39.984413  642041 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 14:02:39.984421  642041 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 14:02:39.984425  642041 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 14:02:39.984431  642041 command_runner.go:130] > monitor_env = [
	I0520 14:02:39.984437  642041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 14:02:39.984443  642041 command_runner.go:130] > ]
	I0520 14:02:39.984448  642041 command_runner.go:130] > privileged_without_host_devices = false
	I0520 14:02:39.984457  642041 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 14:02:39.984462  642041 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 14:02:39.984471  642041 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 14:02:39.984478  642041 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 14:02:39.984488  642041 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 14:02:39.984496  642041 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 14:02:39.984504  642041 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 14:02:39.984513  642041 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 14:02:39.984520  642041 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 14:02:39.984529  642041 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 14:02:39.984532  642041 command_runner.go:130] > # Example:
	I0520 14:02:39.984539  642041 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 14:02:39.984544  642041 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 14:02:39.984549  642041 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 14:02:39.984554  642041 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 14:02:39.984559  642041 command_runner.go:130] > # cpuset = 0
	I0520 14:02:39.984563  642041 command_runner.go:130] > # cpushares = "0-1"
	I0520 14:02:39.984566  642041 command_runner.go:130] > # Where:
	I0520 14:02:39.984574  642041 command_runner.go:130] > # The workload name is workload-type.
	I0520 14:02:39.984581  642041 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 14:02:39.984588  642041 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 14:02:39.984594  642041 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 14:02:39.984602  642041 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 14:02:39.984607  642041 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0520 14:02:39.984615  642041 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 14:02:39.984621  642041 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 14:02:39.984627  642041 command_runner.go:130] > # Default value is set to true
	I0520 14:02:39.984632  642041 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 14:02:39.984637  642041 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 14:02:39.984641  642041 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 14:02:39.984645  642041 command_runner.go:130] > # Default value is set to 'false'
	I0520 14:02:39.984649  642041 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 14:02:39.984654  642041 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 14:02:39.984657  642041 command_runner.go:130] > #
	I0520 14:02:39.984663  642041 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 14:02:39.984668  642041 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 14:02:39.984674  642041 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 14:02:39.984680  642041 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 14:02:39.984685  642041 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 14:02:39.984688  642041 command_runner.go:130] > [crio.image]
	I0520 14:02:39.984693  642041 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 14:02:39.984697  642041 command_runner.go:130] > # default_transport = "docker://"
	I0520 14:02:39.984703  642041 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 14:02:39.984709  642041 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 14:02:39.984712  642041 command_runner.go:130] > # global_auth_file = ""
	I0520 14:02:39.984717  642041 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 14:02:39.984721  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.984725  642041 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 14:02:39.984731  642041 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 14:02:39.984736  642041 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 14:02:39.984740  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.984744  642041 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 14:02:39.984749  642041 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 14:02:39.984754  642041 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 14:02:39.984759  642041 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 14:02:39.984764  642041 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 14:02:39.984768  642041 command_runner.go:130] > # pause_command = "/pause"
	I0520 14:02:39.984773  642041 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 14:02:39.984778  642041 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 14:02:39.984784  642041 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 14:02:39.984790  642041 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 14:02:39.984795  642041 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 14:02:39.984800  642041 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 14:02:39.984804  642041 command_runner.go:130] > # pinned_images = [
	I0520 14:02:39.984812  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984817  642041 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 14:02:39.984823  642041 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 14:02:39.984829  642041 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 14:02:39.984834  642041 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 14:02:39.984839  642041 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 14:02:39.984842  642041 command_runner.go:130] > # signature_policy = ""
	I0520 14:02:39.984847  642041 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 14:02:39.984853  642041 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 14:02:39.984859  642041 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 14:02:39.984865  642041 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 14:02:39.984870  642041 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 14:02:39.984877  642041 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 14:02:39.984883  642041 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 14:02:39.984892  642041 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 14:02:39.984895  642041 command_runner.go:130] > # changing them here.
	I0520 14:02:39.984902  642041 command_runner.go:130] > # insecure_registries = [
	I0520 14:02:39.984905  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984911  642041 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 14:02:39.984917  642041 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 14:02:39.984921  642041 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 14:02:39.984925  642041 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 14:02:39.984931  642041 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 14:02:39.984937  642041 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 14:02:39.984940  642041 command_runner.go:130] > # CNI plugins.
	I0520 14:02:39.984944  642041 command_runner.go:130] > [crio.network]
	I0520 14:02:39.984949  642041 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 14:02:39.984959  642041 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 14:02:39.984963  642041 command_runner.go:130] > # cni_default_network = ""
	I0520 14:02:39.984968  642041 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 14:02:39.984975  642041 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 14:02:39.984980  642041 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 14:02:39.984986  642041 command_runner.go:130] > # plugin_dirs = [
	I0520 14:02:39.984989  642041 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 14:02:39.984992  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984998  642041 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 14:02:39.985003  642041 command_runner.go:130] > [crio.metrics]
	I0520 14:02:39.985008  642041 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 14:02:39.985011  642041 command_runner.go:130] > enable_metrics = true
	I0520 14:02:39.985018  642041 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 14:02:39.985026  642041 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 14:02:39.985034  642041 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0520 14:02:39.985040  642041 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 14:02:39.985048  642041 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 14:02:39.985053  642041 command_runner.go:130] > # metrics_collectors = [
	I0520 14:02:39.985059  642041 command_runner.go:130] > # 	"operations",
	I0520 14:02:39.985063  642041 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 14:02:39.985067  642041 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 14:02:39.985071  642041 command_runner.go:130] > # 	"operations_errors",
	I0520 14:02:39.985077  642041 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 14:02:39.985081  642041 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 14:02:39.985085  642041 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 14:02:39.985092  642041 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 14:02:39.985096  642041 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 14:02:39.985101  642041 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 14:02:39.985106  642041 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 14:02:39.985113  642041 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 14:02:39.985117  642041 command_runner.go:130] > # 	"containers_oom_total",
	I0520 14:02:39.985123  642041 command_runner.go:130] > # 	"containers_oom",
	I0520 14:02:39.985126  642041 command_runner.go:130] > # 	"processes_defunct",
	I0520 14:02:39.985132  642041 command_runner.go:130] > # 	"operations_total",
	I0520 14:02:39.985137  642041 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 14:02:39.985141  642041 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 14:02:39.985145  642041 command_runner.go:130] > # 	"operations_errors_total",
	I0520 14:02:39.985150  642041 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 14:02:39.985155  642041 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 14:02:39.985162  642041 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 14:02:39.985166  642041 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 14:02:39.985169  642041 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 14:02:39.985176  642041 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 14:02:39.985180  642041 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 14:02:39.985187  642041 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 14:02:39.985190  642041 command_runner.go:130] > # ]
	I0520 14:02:39.985197  642041 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 14:02:39.985201  642041 command_runner.go:130] > # metrics_port = 9090
	I0520 14:02:39.985205  642041 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 14:02:39.985209  642041 command_runner.go:130] > # metrics_socket = ""
	I0520 14:02:39.985216  642041 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 14:02:39.985228  642041 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 14:02:39.985238  642041 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 14:02:39.985262  642041 command_runner.go:130] > # certificate on any modification event.
	I0520 14:02:39.985272  642041 command_runner.go:130] > # metrics_cert = ""
	I0520 14:02:39.985281  642041 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 14:02:39.985292  642041 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 14:02:39.985301  642041 command_runner.go:130] > # metrics_key = ""
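
Since enable_metrics = true above and the default metrics_port is 9090, the exporter can be probed directly from the node. A minimal Go sketch of such a probe (the 127.0.0.1:9090 address and the metric-name filter are assumptions for illustration, not taken from this log):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// enable_metrics = true with the default metrics_port exposes a plain
		// Prometheus text endpoint; 127.0.0.1:9090 is an assumed address here.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Print only the prefixed operations counters mentioned in the config above.
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
	}
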
	I0520 14:02:39.985312  642041 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 14:02:39.985321  642041 command_runner.go:130] > [crio.tracing]
	I0520 14:02:39.985326  642041 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 14:02:39.985330  642041 command_runner.go:130] > # enable_tracing = false
	I0520 14:02:39.985336  642041 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 14:02:39.985342  642041 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 14:02:39.985349  642041 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 14:02:39.985354  642041 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 14:02:39.985358  642041 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 14:02:39.985363  642041 command_runner.go:130] > [crio.nri]
	I0520 14:02:39.985367  642041 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 14:02:39.985373  642041 command_runner.go:130] > # enable_nri = false
	I0520 14:02:39.985377  642041 command_runner.go:130] > # NRI socket to listen on.
	I0520 14:02:39.985383  642041 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 14:02:39.985388  642041 command_runner.go:130] > # NRI plugin directory to use.
	I0520 14:02:39.985395  642041 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 14:02:39.985400  642041 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 14:02:39.985407  642041 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 14:02:39.985412  642041 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 14:02:39.985419  642041 command_runner.go:130] > # nri_disable_connections = false
	I0520 14:02:39.985424  642041 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 14:02:39.985429  642041 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 14:02:39.985435  642041 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 14:02:39.985442  642041 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 14:02:39.985447  642041 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 14:02:39.985451  642041 command_runner.go:130] > [crio.stats]
	I0520 14:02:39.985458  642041 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 14:02:39.985463  642041 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 14:02:39.985468  642041 command_runner.go:130] > # stats_collection_period = 0
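
The dump above shows the CRI-O settings the node ends up with (pids_limit = 1024, drop_infra_ctr = false, and so on). A hedged Go sketch of how a couple of these values could be overridden via CRI-O's drop-in directory; this is not minikube's actual code, and the drop-in file name is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// Sketch only: write a drop-in that overrides pids_limit and log_level,
	// two of the values visible in the config dump above.
	func main() {
		dropIn := filepath.Join("/etc/crio/crio.conf.d", "99-overrides.conf")
		conf := fmt.Sprintf("[crio.runtime]\npids_limit = %d\nlog_level = %q\n", 2048, "debug")
		if err := os.WriteFile(dropIn, []byte(conf), 0o644); err != nil {
			panic(err)
		}
		// CRI-O merges drop-ins over /etc/crio/crio.conf; restart the daemon
		// (or rely on live reload for reloadable options such as log_level).
		fmt.Println("wrote", dropIn)
	}
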
	I0520 14:02:39.985629  642041 cni.go:84] Creating CNI manager for ""
	I0520 14:02:39.985643  642041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 14:02:39.985662  642041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:02:39.985686  642041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-114485 NodeName:multinode-114485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 14:02:39.985819  642041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-114485"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
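
The kubeadm config above is rendered from Go values via a text template. A minimal sketch of rendering just the nodeRegistration stanza with text/template; this is not minikube's real template, and the struct and field names are hypothetical:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical data type for the nodeRegistration stanza seen above.
	type nodeReg struct {
		NodeName  string
		NodeIP    string
		CRISocket string
	}

	const stanza = `nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("nodeRegistration").Parse(stanza))
		if err := t.Execute(os.Stdout, nodeReg{
			NodeName:  "multinode-114485",
			NodeIP:    "192.168.39.141",
			CRISocket: "unix:///var/run/crio/crio.sock",
		}); err != nil {
			panic(err)
		}
	}
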
	
	I0520 14:02:39.985883  642041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 14:02:39.995744  642041 command_runner.go:130] > kubeadm
	I0520 14:02:39.995772  642041 command_runner.go:130] > kubectl
	I0520 14:02:39.995778  642041 command_runner.go:130] > kubelet
	I0520 14:02:39.995821  642041 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:02:39.995887  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:02:40.005600  642041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0520 14:02:40.023112  642041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:02:40.039646  642041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0520 14:02:40.055898  642041 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I0520 14:02:40.059544  642041 command_runner.go:130] > 192.168.39.141	control-plane.minikube.internal
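
The grep above confirms that /etc/hosts on the node maps control-plane.minikube.internal to the control-plane IP. An equivalent check written as a Go sketch (not minikube's implementation):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hostsHasEntry reports whether /etc/hosts maps the given hostname to the given IP.
	func hostsHasEntry(path, ip, host string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == ip {
				for _, h := range fields[1:] {
					if h == host {
						return true, nil
					}
				}
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hostsHasEntry("/etc/hosts", "192.168.39.141", "control-plane.minikube.internal")
		if err != nil {
			panic(err)
		}
		fmt.Println("entry present:", ok)
	}
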
	I0520 14:02:40.059615  642041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:02:40.205637  642041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:02:40.222709  642041 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485 for IP: 192.168.39.141
	I0520 14:02:40.222738  642041 certs.go:194] generating shared ca certs ...
	I0520 14:02:40.222760  642041 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:02:40.222947  642041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:02:40.223019  642041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:02:40.223051  642041 certs.go:256] generating profile certs ...
	I0520 14:02:40.223167  642041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/client.key
	I0520 14:02:40.223242  642041 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key.1cd91c1b
	I0520 14:02:40.223303  642041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key
	I0520 14:02:40.223318  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 14:02:40.223333  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 14:02:40.223350  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 14:02:40.223366  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 14:02:40.223383  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 14:02:40.223409  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 14:02:40.223425  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 14:02:40.223441  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 14:02:40.223505  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:02:40.223541  642041 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:02:40.223556  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:02:40.223585  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:02:40.223616  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:02:40.223649  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:02:40.223698  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:02:40.223735  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.223753  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.223770  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.224643  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:02:40.250630  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:02:40.273706  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:02:40.296666  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:02:40.319223  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 14:02:40.342137  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 14:02:40.364834  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:02:40.387371  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:02:40.409308  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:02:40.431654  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:02:40.453720  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:02:40.475773  642041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:02:40.491827  642041 ssh_runner.go:195] Run: openssl version
	I0520 14:02:40.497259  642041 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 14:02:40.497347  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:02:40.507160  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511226  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511426  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511485  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.516633  642041 command_runner.go:130] > 3ec20f2e
	I0520 14:02:40.516749  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 14:02:40.525859  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:02:40.536264  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540427  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540470  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540532  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.545744  642041 command_runner.go:130] > b5213941
	I0520 14:02:40.545835  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:02:40.555089  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:02:40.565731  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.569779  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.569965  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.570020  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.575155  642041 command_runner.go:130] > 51391683
	I0520 14:02:40.575278  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
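
Each certificate above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (the 8-hex-digit values printed by `openssl x509 -hash -noout`, e.g. b5213941 for minikubeCA.pem). A hedged Go sketch of the `test -L ... || ln -fs ...` step only; the hash is assumed to come from openssl, and computing it is not shown:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// linkCert creates /etc/ssl/certs/<hash>.0 pointing at certPath,
	// skipping the step if the link already exists (mirroring `test -L || ln -fs`).
	func linkCert(certPath, hash string) error {
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink (or file) already present
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/etc/ssl/certs/minikubeCA.pem", "b5213941"); err != nil {
			panic(err)
		}
		fmt.Println("linked")
	}
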
	I0520 14:02:40.584461  642041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:02:40.588716  642041 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:02:40.588747  642041 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 14:02:40.588756  642041 command_runner.go:130] > Device: 253,1	Inode: 5245462     Links: 1
	I0520 14:02:40.588765  642041 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 14:02:40.588773  642041 command_runner.go:130] > Access: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588787  642041 command_runner.go:130] > Modify: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588799  642041 command_runner.go:130] > Change: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588807  642041 command_runner.go:130] >  Birth: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588869  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 14:02:40.594128  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.594308  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 14:02:40.599424  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.599573  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 14:02:40.604801  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.604871  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 14:02:40.610006  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.610177  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 14:02:40.615543  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.615613  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 14:02:40.620802  642041 command_runner.go:130] > Certificate will not expire
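
Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The same check expressed as a Go sketch using only the standard library:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate's NotAfter falls inside
	// the next duration d, matching `openssl x509 -checkend` semantics.
	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if expiring {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
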
	I0520 14:02:40.620876  642041 kubeadm.go:391] StartCluster: {Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:02:40.620994  642041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:02:40.621074  642041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:02:40.661215  642041 command_runner.go:130] > 1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7
	I0520 14:02:40.661240  642041 command_runner.go:130] > a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd
	I0520 14:02:40.661263  642041 command_runner.go:130] > 40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834
	I0520 14:02:40.661269  642041 command_runner.go:130] > 402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a
	I0520 14:02:40.661274  642041 command_runner.go:130] > b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be
	I0520 14:02:40.661279  642041 command_runner.go:130] > 68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2
	I0520 14:02:40.661284  642041 command_runner.go:130] > 724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea
	I0520 14:02:40.661314  642041 command_runner.go:130] > 08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c
	I0520 14:02:40.661338  642041 cri.go:89] found id: "1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7"
	I0520 14:02:40.661346  642041 cri.go:89] found id: "a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd"
	I0520 14:02:40.661349  642041 cri.go:89] found id: "40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834"
	I0520 14:02:40.661355  642041 cri.go:89] found id: "402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a"
	I0520 14:02:40.661358  642041 cri.go:89] found id: "b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be"
	I0520 14:02:40.661361  642041 cri.go:89] found id: "68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2"
	I0520 14:02:40.661366  642041 cri.go:89] found id: "724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea"
	I0520 14:02:40.661369  642041 cri.go:89] found id: "08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c"
	I0520 14:02:40.661371  642041 cri.go:89] found id: ""
	I0520 14:02:40.661414  642041 ssh_runner.go:195] Run: sudo runc list -f json
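The container IDs reported as "found id:" above come from the preceding crictl invocation (crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system), which prints one container ID per line for every kube-system container regardless of state. A minimal Go sketch of that listing, assuming crictl is installed on the node and sudo access is available (illustrative helper names, not minikube's actual code), could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the same crictl command seen in the log and
	// returns the non-empty container IDs, one per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}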
	
	
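Similarly, the repeated openssl x509 -noout -in <cert> -checkend 86400 probes earlier in this log verify that each control-plane certificate remains valid for at least the next 24 hours: openssl exits 0 when the certificate will not expire within that window and non-zero when it will. A minimal Go sketch of the same check, assuming openssl is on PATH and using an illustrative certificate path (not minikube's actual implementation), might be:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h reports whether the certificate at path is still valid for
	// the next 86400 seconds, mirroring the -checkend probe in the log.
	func certValidFor24h(path string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return false, nil // non-zero exit: certificate expires within 24h
			}
			return false, err // openssl missing or certificate unreadable
		}
		return true, nil
	}

	func main() {
		// Hypothetical path for illustration; the log checks several certificates
		// under /var/lib/minikube/certs.
		ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("valid for at least 24h:", ok)
	}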
	==> CRI-O <==
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.538435329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213845538414498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3711fe65-ec8b-4537-b72c-c5058e2b2e82 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.539025892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be3ef95e-a15a-4eb0-8eb0-6b0d5c57a5fd name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.539079479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be3ef95e-a15a-4eb0-8eb0-6b0d5c57a5fd name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.539407329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be3ef95e-a15a-4eb0-8eb0-6b0d5c57a5fd name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.586029989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49c37b49-c165-4c06-8662-036ca3cfae11 name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.586103266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49c37b49-c165-4c06-8662-036ca3cfae11 name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.587236264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=971e623a-61c8-4fef-aeab-d40e2ca18f74 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.587884447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213845587857337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=971e623a-61c8-4fef-aeab-d40e2ca18f74 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.588396241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=539e7796-882e-430c-98fc-3d9636665c7a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.588449868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=539e7796-882e-430c-98fc-3d9636665c7a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.588973601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=539e7796-882e-430c-98fc-3d9636665c7a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.631242554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=463d58fd-2164-4205-b120-3f56a5a5a878 name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.631354625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=463d58fd-2164-4205-b120-3f56a5a5a878 name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.632572476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f4e20f6-69a1-4aba-bd18-2e1f8ca38b05 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.633123895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213845633094667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f4e20f6-69a1-4aba-bd18-2e1f8ca38b05 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.633835824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2885f39-c01a-433b-942a-f9f211a03741 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.633917778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2885f39-c01a-433b-942a-f9f211a03741 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.634467157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2885f39-c01a-433b-942a-f9f211a03741 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.676188625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f01685ed-99ac-47d1-81d6-0ca216792bae name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.676265030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f01685ed-99ac-47d1-81d6-0ca216792bae name=/runtime.v1.RuntimeService/Version
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.677341075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ee6a279-7fc7-4f84-90a1-385d90276f25 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.677939939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213845677910695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ee6a279-7fc7-4f84-90a1-385d90276f25 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.678392389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79e54303-6679-4274-b168-b2576b0eb422 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.678463045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79e54303-6679-4274-b168-b2576b0eb422 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:04:05 multinode-114485 crio[2846]: time="2024-05-20 14:04:05.678893757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79e54303-6679-4274-b168-b2576b0eb422 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5b5aa659743d0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   f77cab11bbedc       busybox-fc5497c4f-w8gjh
	df75c1bbda42f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   e115268233508       kindnet-cthl4
	10846343b9337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   2c7c485c8e904       coredns-7db6d8ff4d-2vnnq
	a8aa664fbcdbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   52ff335575c31       storage-provisioner
	222a378af63cf       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   e877889b86a8f       kube-proxy-c5jv4
	b47a632823b64       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   1519ac6f1d6de       kube-controller-manager-multinode-114485
	552538ae34f5b       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   e8eb7950cef0c       kube-scheduler-multinode-114485
	3ba8ed72509ec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   995ff54780bf9       etcd-multinode-114485
	a0950bfbde431       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   6be4e58e0ac0a       kube-apiserver-multinode-114485
	7a65b004f5928       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   4b8717ffa777a       busybox-fc5497c4f-w8gjh
	1ff5148b0c6ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   0a83d9634f503       coredns-7db6d8ff4d-2vnnq
	a6d4b37910552       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   b5fc4aba67a0c       storage-provisioner
	40573632694f5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   7cab01574fea9       kindnet-cthl4
	402715f11c169       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   a10d72941c542       kube-proxy-c5jv4
	b880898a00654       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   399f171341d41       kube-scheduler-multinode-114485
	68b22a0039a12       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   b63f4ee24680b       etcd-multinode-114485
	724a0f328d829       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   67e21022c7a34       kube-apiserver-multinode-114485
	08459b873db5f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   7a27d81d93d24       kube-controller-manager-multinode-114485
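
The container status table above is CRI-O's view of the node, including exited (Attempt 0) and restarted (Attempt 1) containers. As a minimal sketch of how to reproduce a similar listing, assuming the profile name multinode-114485 taken from these logs and a minikube binary on PATH (neither is guaranteed by the harness itself):

  $ minikube -p multinode-114485 ssh "sudo crictl ps -a"   # assumed profile name; -a also lists exited containers

The CREATED/STATE/ATTEMPT columns correspond to the CreatedAt, State and restartCount fields in the ListContainers responses logged above.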
	
	
	==> coredns [10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53624 - 32218 "HINFO IN 304773559447969083.7830795655233602673. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00893932s
	
	
	==> coredns [1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7] <==
	[INFO] 10.244.0.3:39109 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770291s
	[INFO] 10.244.0.3:58040 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104751s
	[INFO] 10.244.0.3:40242 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142644s
	[INFO] 10.244.0.3:59131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001361565s
	[INFO] 10.244.0.3:40719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094422s
	[INFO] 10.244.0.3:56913 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112225s
	[INFO] 10.244.0.3:49638 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001034326s
	[INFO] 10.244.1.2:50916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293699s
	[INFO] 10.244.1.2:37156 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203471s
	[INFO] 10.244.1.2:42754 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171979s
	[INFO] 10.244.1.2:35018 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083128s
	[INFO] 10.244.0.3:43392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194206s
	[INFO] 10.244.0.3:42750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075694s
	[INFO] 10.244.0.3:36250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200931s
	[INFO] 10.244.0.3:53362 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154831s
	[INFO] 10.244.1.2:55106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106323s
	[INFO] 10.244.1.2:45726 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357128s
	[INFO] 10.244.1.2:34646 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114132s
	[INFO] 10.244.1.2:38586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200709s
	[INFO] 10.244.0.3:45806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083628s
	[INFO] 10.244.0.3:43445 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000038864s
	[INFO] 10.244.0.3:50558 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00003753s
	[INFO] 10.244.0.3:47130 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000035206s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
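
The two coredns blocks above come from the exited (restartCount 0) and running (restartCount 1) containers of the same coredns-7db6d8ff4d-2vnnq pod. If they need to be re-read outside this report, a sketch assuming the container ID and pod name shown above and a kubeconfig pointing at this cluster:

  $ minikube -p multinode-114485 ssh "sudo crictl logs 1ff5148b0c6ef"   # exited coredns container, by ID prefix (assumed profile name)
  $ kubectl -n kube-system logs coredns-7db6d8ff4d-2vnnq --previous     # previous container instance via the API server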
	
	
	==> describe nodes <==
	Name:               multinode-114485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-114485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=multinode-114485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_56_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-114485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:03:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    multinode-114485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7eaa9386e4541a9b98eb4fedef56182
	  System UUID:                f7eaa938-6e45-41a9-b98e-b4fedef56182
	  Boot ID:                    da877314-8b45-4837-8c5b-bf338c249bde
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8gjh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 coredns-7db6d8ff4d-2vnnq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m12s
	  kube-system                 etcd-multinode-114485                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m26s
	  kube-system                 kindnet-cthl4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m12s
	  kube-system                 kube-apiserver-multinode-114485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-multinode-114485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-c5jv4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-scheduler-multinode-114485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m11s              kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m26s              kubelet          Node multinode-114485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m26s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s              kubelet          Node multinode-114485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s              kubelet          Node multinode-114485 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m26s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m13s              node-controller  Node multinode-114485 event: Registered Node multinode-114485 in Controller
	  Normal  NodeReady                7m10s              kubelet          Node multinode-114485 status is now: NodeReady
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node multinode-114485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node multinode-114485 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-114485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node multinode-114485 event: Registered Node multinode-114485 in Controller
	
	
	Name:               multinode-114485-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-114485-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=multinode-114485
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T14_03_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:03:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-114485-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:04:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:03:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-114485-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a746118cadd34140a015ce06209237c3
	  System UUID:                a746118c-add3-4140-a015-ce06209237c3
	  Boot ID:                    fc6c1200-4ab5-49fe-a2de-7f6362203f15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bcfmm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kindnet-xcxtk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-6w2qv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m33s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m40s)  kubelet     Node multinode-114485-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m40s)  kubelet     Node multinode-114485-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m40s)  kubelet     Node multinode-114485-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m29s                  kubelet     Node multinode-114485-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  42s (x2 over 42s)      kubelet     Node multinode-114485-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x2 over 42s)      kubelet     Node multinode-114485-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x2 over 42s)      kubelet     Node multinode-114485-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-114485-m02 status is now: NodeReady
	
	
	Name:               multinode-114485-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-114485-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=multinode-114485
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T14_03_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-114485-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:04:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:04:02 +0000   Mon, 20 May 2024 14:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:04:02 +0000   Mon, 20 May 2024 14:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:04:02 +0000   Mon, 20 May 2024 14:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:04:02 +0000   Mon, 20 May 2024 14:04:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-114485-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 424dd38b79a74697bd0adb566116bc77
	  System UUID:                424dd38b-79a7-4697-bd0a-db566116bc77
	  Boot ID:                    eb322a9a-717a-4d1a-8a3a-4db46ca86ce0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8hz6f       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-proxy-6fkdn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m49s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m9s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m55s (x3 over 5m55s)  kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x3 over 5m55s)  kubelet     Node multinode-114485-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x3 over 5m55s)  kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m45s                  kubelet     Node multinode-114485-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m15s (x2 over 5m15s)  kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m15s (x2 over 5m15s)  kubelet     Node multinode-114485-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m15s (x2 over 5m15s)  kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m6s                   kubelet     Node multinode-114485-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-114485-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-114485-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-114485-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.329412] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059863] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061281] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174792] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.138775] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.266440] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.964781] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.090345] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.061845] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.978278] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.074685] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.824866] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.360732] systemd-fstab-generator[1471]: Ignoring "noauto" option for root device
	[May20 13:57] kauditd_printk_skb: 84 callbacks suppressed
	[May20 14:02] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.137355] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.178565] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.140879] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.268383] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +1.969370] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +1.830679] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.734564] kauditd_printk_skb: 144 callbacks suppressed
	[ +16.143280] kauditd_printk_skb: 72 callbacks suppressed
	[May20 14:03] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[ +20.025623] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519] <==
	{"level":"info","ts":"2024-05-20T14:02:43.207245Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:02:43.20732Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:02:43.208354Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:02:43.211541Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:02:43.212574Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:02:43.211201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb switched to configuration voters=(2565046577238143947)"}
	{"level":"info","ts":"2024-05-20T14:02:43.213509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","added-peer-id":"2398e045949c73cb","added-peer-peer-urls":["https://192.168.39.141:2380"]}
	{"level":"info","ts":"2024-05-20T14:02:43.211332Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:02:43.216093Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:02:43.216326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:02:43.216607Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:02:44.259142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.26659Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:multinode-114485 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:02:44.266857Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:02:44.267899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:02:44.26889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:02:44.268923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T14:02:44.268924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T14:02:44.272596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.141:2379"}
	
	
	==> etcd [68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2] <==
	{"level":"info","ts":"2024-05-20T13:57:27.222093Z","caller":"traceutil/trace.go:171","msg":"trace[486503619] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"263.948158ms","start":"2024-05-20T13:57:26.958128Z","end":"2024-05-20T13:57:27.222076Z","steps":["trace[486503619] 'process raft request'  (duration: 66.845703ms)","trace[486503619] 'compare'  (duration: 195.957062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:57:27.222242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.615888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:57:27.222288Z","caller":"traceutil/trace.go:171","msg":"trace[46184745] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:453; }","duration":"169.669219ms","start":"2024-05-20T13:57:27.052607Z","end":"2024-05-20T13:57:27.222276Z","steps":["trace[46184745] 'agreement among raft nodes before linearized reading'  (duration: 169.582858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:58:11.290597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.104934ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920610286423031 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-bcfrb\" mod_revision:576 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-bcfrb\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-bcfrb\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T13:58:11.290966Z","caller":"traceutil/trace.go:171","msg":"trace[1657015360] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:612; }","duration":"236.862655ms","start":"2024-05-20T13:58:11.054073Z","end":"2024-05-20T13:58:11.290936Z","steps":["trace[1657015360] 'read index received'  (duration: 97.002572ms)","trace[1657015360] 'applied index is now lower than readState.Index'  (duration: 139.85839ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:58:11.291378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.203364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:58:11.291455Z","caller":"traceutil/trace.go:171","msg":"trace[1003733973] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:578; }","duration":"237.394051ms","start":"2024-05-20T13:58:11.05405Z","end":"2024-05-20T13:58:11.291444Z","steps":["trace[1003733973] 'agreement among raft nodes before linearized reading'  (duration: 237.086596ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:58:11.291553Z","caller":"traceutil/trace.go:171","msg":"trace[857176718] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"252.499548ms","start":"2024-05-20T13:58:11.039033Z","end":"2024-05-20T13:58:11.291532Z","steps":["trace[857176718] 'process raft request'  (duration: 112.08375ms)","trace[857176718] 'compare'  (duration: 138.96424ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:58:18.564587Z","caller":"traceutil/trace.go:171","msg":"trace[197123083] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"121.737931ms","start":"2024-05-20T13:58:18.442815Z","end":"2024-05-20T13:58:18.564553Z","steps":["trace[197123083] 'process raft request'  (duration: 121.477447ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:58:19.016428Z","caller":"traceutil/trace.go:171","msg":"trace[998706925] linearizableReadLoop","detail":"{readStateIndex:659; appliedIndex:658; }","duration":"105.368948ms","start":"2024-05-20T13:58:18.911042Z","end":"2024-05-20T13:58:19.016411Z","steps":["trace[998706925] 'read index received'  (duration: 24.60621ms)","trace[998706925] 'applied index is now lower than readState.Index'  (duration: 80.762083ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:58:19.016585Z","caller":"traceutil/trace.go:171","msg":"trace[336241855] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"145.409896ms","start":"2024-05-20T13:58:18.871168Z","end":"2024-05-20T13:58:19.016578Z","steps":["trace[336241855] 'process raft request'  (duration: 64.522791ms)","trace[336241855] 'compare'  (duration: 80.641083ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:58:19.016449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.741691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.141\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-20T13:58:19.016671Z","caller":"traceutil/trace.go:171","msg":"trace[205568241] range","detail":"{range_begin:/registry/masterleases/192.168.39.141; range_end:; response_count:1; response_revision:619; }","duration":"255.006766ms","start":"2024-05-20T13:58:18.761652Z","end":"2024-05-20T13:58:19.016659Z","steps":["trace[205568241] 'range keys from in-memory index tree'  (duration: 254.551981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:58:19.017298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.244166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:58:19.017995Z","caller":"traceutil/trace.go:171","msg":"trace[676301740] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:620; }","duration":"106.94281ms","start":"2024-05-20T13:58:18.911018Z","end":"2024-05-20T13:58:19.017961Z","steps":["trace[676301740] 'agreement among raft nodes before linearized reading'  (duration: 106.252638ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T14:01:06.15863Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T14:01:06.158743Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-114485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	{"level":"warn","ts":"2024-05-20T14:01:06.158887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.158989Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.212994Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.213083Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T14:01:06.213155Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2398e045949c73cb","current-leader-member-id":"2398e045949c73cb"}
	{"level":"info","ts":"2024-05-20T14:01:06.21935Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:01:06.219461Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:01:06.219482Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-114485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	
	
	==> kernel <==
	 14:04:06 up 7 min,  0 users,  load average: 0.25, 0.35, 0.22
	Linux multinode-114485 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834] <==
	I0520 14:00:25.817562       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:35.828869       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:35.828962       1 main.go:227] handling current node
	I0520 14:00:35.828987       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:35.829005       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:35.829139       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:35.829177       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:45.836237       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:45.838748       1 main.go:227] handling current node
	I0520 14:00:45.838871       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:45.838909       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:45.839100       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:45.839151       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:55.844461       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:55.844506       1 main.go:227] handling current node
	I0520 14:00:55.844535       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:55.844543       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:55.844713       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:55.844738       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:01:05.856089       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:01:05.856120       1 main.go:227] handling current node
	I0520 14:01:05.856130       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:01:05.856136       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:01:05.856233       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:01:05.856251       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19] <==
	I0520 14:03:17.632608       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:03:27.647261       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:03:27.647318       1 main.go:227] handling current node
	I0520 14:03:27.647339       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:03:27.647347       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:03:27.647470       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:03:27.647498       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:03:37.652031       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:03:37.652064       1 main.go:227] handling current node
	I0520 14:03:37.652074       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:03:37.652080       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:03:37.652200       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:03:37.652217       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:03:47.695707       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:03:47.695762       1 main.go:227] handling current node
	I0520 14:03:47.695815       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:03:47.695825       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:03:47.695938       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:03:47.695954       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:03:57.710561       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:03:57.710599       1 main.go:227] handling current node
	I0520 14:03:57.710609       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:03:57.710616       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:03:57.710857       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:03:57.710879       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea] <==
	W0520 14:01:06.162680       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.162699       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.171405       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172643       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172727       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172961       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173141       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173228       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173278       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173336       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173361       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173409       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173456       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173499       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173526       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173580       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173632       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173678       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173729       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173815       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173863       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173909       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173340       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173149       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173504       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd] <==
	I0520 14:02:45.643388       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:02:45.655644       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 14:02:45.671973       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 14:02:45.674898       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:02:45.674984       1 policy_source.go:224] refreshing policies
	I0520 14:02:45.699272       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 14:02:45.699750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 14:02:45.700259       1 aggregator.go:165] initial CRD sync complete...
	I0520 14:02:45.700304       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 14:02:45.700328       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 14:02:45.700352       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:02:45.700354       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0520 14:02:45.710174       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 14:02:45.710597       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 14:02:45.710616       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 14:02:45.710716       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:02:45.711634       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 14:02:46.505686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:02:47.711340       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 14:02:47.832215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 14:02:47.844629       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 14:02:47.957511       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:02:47.977144       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 14:02:58.787000       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 14:02:58.789334       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c] <==
	I0520 13:57:27.227568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m02\" does not exist"
	I0520 13:57:27.237994       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:57:27.264084       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-114485-m02"
	I0520 13:57:37.630344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:57:39.979675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.490314ms"
	I0520 13:57:39.994897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.140026ms"
	I0520 13:57:40.009546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.597496ms"
	I0520 13:57:40.009628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.411µs"
	I0520 13:57:43.109439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.344769ms"
	I0520 13:57:43.110975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.886µs"
	I0520 13:57:43.796414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.233683ms"
	I0520 13:57:43.797324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.637µs"
	I0520 13:58:11.344184       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 13:58:11.345452       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:11.359247       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:58:12.284289       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-114485-m03"
	I0520 13:58:21.896612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:50.149100       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:51.688969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 13:58:51.689527       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:51.696376       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.3.0/24"]
	I0520 13:59:00.450899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:59:42.341121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:59:42.404865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.745822ms"
	I0520 13:59:42.405022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.016µs"
	
	
	==> kube-controller-manager [b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1] <==
	I0520 14:02:59.486621       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:02:59.486714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 14:03:20.664069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.537287ms"
	I0520 14:03:20.664270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.119µs"
	I0520 14:03:20.678575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.163617ms"
	I0520 14:03:20.679132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.474µs"
	I0520 14:03:24.757964       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m02\" does not exist"
	I0520 14:03:24.767673       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m02" podCIDRs=["10.244.1.0/24"]
	I0520 14:03:26.654037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.392µs"
	I0520 14:03:26.702382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.215µs"
	I0520 14:03:26.712952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.386µs"
	I0520 14:03:26.718268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.942µs"
	I0520 14:03:26.726893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.896µs"
	I0520 14:03:26.731056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.892µs"
	I0520 14:03:29.049822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.759µs"
	I0520 14:03:34.151038       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:34.176738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.606µs"
	I0520 14:03:34.197898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.093µs"
	I0520 14:03:37.164106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.08319ms"
	I0520 14:03:37.164227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.17µs"
	I0520 14:03:52.430672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:53.638451       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 14:03:53.638615       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:53.651846       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.2.0/24"]
	I0520 14:04:02.704103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	
	
	==> kube-proxy [222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f] <==
	I0520 14:02:47.066219       1 server_linux.go:69] "Using iptables proxy"
	I0520 14:02:47.083590       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0520 14:02:47.147688       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 14:02:47.147743       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 14:02:47.147764       1 server_linux.go:165] "Using iptables Proxier"
	I0520 14:02:47.150276       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 14:02:47.150457       1 server.go:872] "Version info" version="v1.30.1"
	I0520 14:02:47.150487       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:02:47.152031       1 config.go:192] "Starting service config controller"
	I0520 14:02:47.152067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 14:02:47.152093       1 config.go:101] "Starting endpoint slice config controller"
	I0520 14:02:47.152108       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 14:02:47.152642       1 config.go:319] "Starting node config controller"
	I0520 14:02:47.152672       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 14:02:47.252739       1 shared_informer.go:320] Caches are synced for node config
	I0520 14:02:47.252766       1 shared_informer.go:320] Caches are synced for service config
	I0520 14:02:47.252844       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a] <==
	I0520 13:56:54.255059       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:56:54.292715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0520 13:56:54.403699       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:56:54.403748       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:56:54.403818       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:56:54.406057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:56:54.406269       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:56:54.406294       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:56:54.409602       1 config.go:192] "Starting service config controller"
	I0520 13:56:54.409632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:56:54.409651       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:56:54.409655       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:56:54.410232       1 config.go:319] "Starting node config controller"
	I0520 13:56:54.410238       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:56:54.509763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:56:54.509955       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:56:54.510727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d] <==
	I0520 14:02:43.763062       1 serving.go:380] Generated self-signed cert in-memory
	W0520 14:02:45.584678       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 14:02:45.584761       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:02:45.584820       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 14:02:45.584831       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 14:02:45.646589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 14:02:45.646670       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:02:45.656614       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 14:02:45.656743       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 14:02:45.656808       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:02:45.656835       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 14:02:45.757816       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be] <==
	W0520 13:56:37.976870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:56:37.976999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:56:38.007470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:56:38.007860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:56:38.026048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.026175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.110873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:56:38.110975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:56:38.125253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.125341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.130266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:56:38.130349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:56:38.139982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.140085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.219663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:56:38.219932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:56:38.318455       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:56:38.318579       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:56:38.345634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:56:38.345663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:56:40.602184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:01:06.157663       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0520 14:01:06.157834       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0520 14:01:06.158106       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0520 14:01:06.174409       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 14:02:42 multinode-114485 kubelet[3059]: I0520 14:02:42.856041    3059 kubelet_node_status.go:73] "Attempting to register node" node="multinode-114485"
	May 20 14:02:42 multinode-114485 kubelet[3059]: E0520 14:02:42.859030    3059 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.141:8443: connect: connection refused" node="multinode-114485"
	May 20 14:02:43 multinode-114485 kubelet[3059]: I0520 14:02:43.660394    3059 kubelet_node_status.go:73] "Attempting to register node" node="multinode-114485"
	May 20 14:02:45 multinode-114485 kubelet[3059]: I0520 14:02:45.775681    3059 kubelet_node_status.go:112] "Node was previously registered" node="multinode-114485"
	May 20 14:02:45 multinode-114485 kubelet[3059]: I0520 14:02:45.776109    3059 kubelet_node_status.go:76] "Successfully registered node" node="multinode-114485"
	May 20 14:02:45 multinode-114485 kubelet[3059]: I0520 14:02:45.778061    3059 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 14:02:45 multinode-114485 kubelet[3059]: I0520 14:02:45.779187    3059 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.122517    3059 apiserver.go:52] "Watching apiserver"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.125974    3059 topology_manager.go:215] "Topology Admit Handler" podUID="0102751c-4388-4e4d-80ed-3115f4ae124d" podNamespace="kube-system" podName="kube-proxy-c5jv4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.127705    3059 topology_manager.go:215] "Topology Admit Handler" podUID="8e815096-de18-40b2-af12-e6cbc2faf393" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2vnnq"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.128560    3059 topology_manager.go:215] "Topology Admit Handler" podUID="bd51aead-83ce-49c7-a860-e88ae9e25ff1" podNamespace="kube-system" podName="kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.128916    3059 topology_manager.go:215] "Topology Admit Handler" podUID="e16b9968-0b37-4750-bf40-91d6bcf8dd47" podNamespace="kube-system" podName="storage-provisioner"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.129020    3059 topology_manager.go:215] "Topology Admit Handler" podUID="a510c35e-ae74-4076-a8ae-12913bb167bc" podNamespace="default" podName="busybox-fc5497c4f-w8gjh"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.138393    3059 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.192730    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-xtables-lock\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.192952    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e16b9968-0b37-4750-bf40-91d6bcf8dd47-tmp\") pod \"storage-provisioner\" (UID: \"e16b9968-0b37-4750-bf40-91d6bcf8dd47\") " pod="kube-system/storage-provisioner"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193033    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0102751c-4388-4e4d-80ed-3115f4ae124d-xtables-lock\") pod \"kube-proxy-c5jv4\" (UID: \"0102751c-4388-4e4d-80ed-3115f4ae124d\") " pod="kube-system/kube-proxy-c5jv4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193100    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-lib-modules\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193194    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-cni-cfg\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193260    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0102751c-4388-4e4d-80ed-3115f4ae124d-lib-modules\") pod \"kube-proxy-c5jv4\" (UID: \"0102751c-4388-4e4d-80ed-3115f4ae124d\") " pod="kube-system/kube-proxy-c5jv4"
	May 20 14:03:42 multinode-114485 kubelet[3059]: E0520 14:03:42.173236    3059 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 14:03:42 multinode-114485 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 14:03:42 multinode-114485 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 14:03:42 multinode-114485 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 14:03:42 multinode-114485 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 14:04:05.262820  643128 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18929-602525/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-114485 -n multinode-114485
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-114485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.04s)
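
Note on the "bufio.Scanner: token too long" error in the stderr block of this post-mortem: that message comes from Go's bufio.Scanner, whose default per-token (per-line) limit is 64 KiB, while the cluster-config dump written to lastStart.txt contains single lines far longer than that. Below is a minimal sketch, not minikube's actual logs.go code, of reading such a file with an enlarged scanner buffer; the 1 MiB cap is an arbitrary value chosen for illustration.

	// readLongLines reads a file line by line while raising bufio.Scanner's
	// default 64 KiB token limit, which is what the "token too long" error
	// above is hitting on lastStart.txt.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); allow up
		// to 1 MiB per line so a long config-dump line does not return ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
			return
		}
		fmt.Println("read", len(lines), "lines")
	}
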

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 stop
E0520 14:05:02.806210  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114485 stop: exit status 82 (2m0.493887893s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-114485-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-114485 stop": exit status 82
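
The exit status 82 run above failed with GUEST_STOP_TIMEOUT: the stop budget of roughly two minutes expired while the m02 VM still reported state "Running". The general pattern involved is polling the machine state until it reports stopped or a deadline passes. The sketch below is hypothetical and generic, not minikube or libmachine code; getState is a placeholder for whatever driver call reports machine state.

	// waitForStopped polls a state-reporting callback until it returns "Stopped"
	// or the timeout elapses, mirroring the failure mode above where the node
	// never left "Running" within the allotted time.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForStopped(getState func() (string, error), timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval; arbitrary for this sketch
		}
		return errors.New("timed out waiting for VM to stop; last reported state was still Running")
	}

	func main() {
		// Fake driver that never stops, reproducing the timeout outcome seen above.
		err := waitForStopped(func() (string, error) { return "Running", nil }, 5*time.Second)
		fmt.Println(err)
	}
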
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114485 status: exit status 3 (18.872384802s)

                                                
                                                
-- stdout --
	multinode-114485
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-114485-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 14:06:29.037635  643796 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0520 14:06:29.037679  643796 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-114485 status" : exit status 3
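
The status failure above is a connectivity problem rather than a Kubernetes one: both stderr lines show the SSH dial to 192.168.39.55:22 failing with "no route to host", which is why the m02 host is reported as Error and its kubelet as Nonexistent. A minimal sketch of that kind of reachability probe, using only net.DialTimeout from the standard library; the address is simply the m02 IP reported in this run.

	// sshReachable attempts a plain TCP connection to a node's SSH port with a
	// short timeout, distinguishing "no route to host" (VM down or detached
	// from the network) from a host that answers on 22 but has kubelet issues.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err // e.g. "connect: no route to host" when the VM is gone
		}
		conn.Close()
		return nil
	}

	func main() {
		if err := sshReachable("192.168.39.55:22", 3*time.Second); err != nil {
			fmt.Println("node unreachable over SSH:", err)
			return
		}
		fmt.Println("SSH port reachable; investigate kubelet/apiserver instead")
	}
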
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-114485 -n multinode-114485
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-114485 logs -n 25: (1.478028171s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485:/home/docker/cp-test_multinode-114485-m02_multinode-114485.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485 sudo cat                                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m02_multinode-114485.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03:/home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485-m03 sudo cat                                   | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp testdata/cp-test.txt                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485:/home/docker/cp-test_multinode-114485-m03_multinode-114485.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485 sudo cat                                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02:/home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485-m02 sudo cat                                   | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-114485 node stop m03                                                          | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	| node    | multinode-114485 node start                                                             | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:59 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| stop    | -p multinode-114485                                                                     | multinode-114485 | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| start   | -p multinode-114485                                                                     | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:01 UTC | 20 May 24 14:04 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:04 UTC |                     |
	| node    | multinode-114485 node delete                                                            | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:04 UTC | 20 May 24 14:04 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-114485 stop                                                                   | multinode-114485 | jenkins | v1.33.1 | 20 May 24 14:04 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:01:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 14:01:05.176049  642041 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:01:05.176323  642041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:01:05.176333  642041 out.go:304] Setting ErrFile to fd 2...
	I0520 14:01:05.176337  642041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:01:05.176543  642041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:01:05.177139  642041 out.go:298] Setting JSON to false
	I0520 14:01:05.178178  642041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":13405,"bootTime":1716200260,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:01:05.178239  642041 start.go:139] virtualization: kvm guest
	I0520 14:01:05.181364  642041 out.go:177] * [multinode-114485] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:01:05.183580  642041 notify.go:220] Checking for updates...
	I0520 14:01:05.183590  642041 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:01:05.186004  642041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:01:05.188275  642041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:01:05.190405  642041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:01:05.192514  642041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:01:05.194728  642041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:01:05.197257  642041 config.go:182] Loaded profile config "multinode-114485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:01:05.197358  642041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:01:05.197760  642041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:01:05.197818  642041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:01:05.214141  642041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I0520 14:01:05.214695  642041 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:01:05.215362  642041 main.go:141] libmachine: Using API Version  1
	I0520 14:01:05.215393  642041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:01:05.215727  642041 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:01:05.215892  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.254342  642041 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:01:05.256465  642041 start.go:297] selected driver: kvm2
	I0520 14:01:05.256489  642041 start.go:901] validating driver "kvm2" against &{Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:01:05.256664  642041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:01:05.257034  642041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:01:05.257151  642041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:01:05.273751  642041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:01:05.274476  642041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:01:05.274533  642041 cni.go:84] Creating CNI manager for ""
	I0520 14:01:05.274542  642041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 14:01:05.274604  642041 start.go:340] cluster config:
	{Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:01:05.274782  642041 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:01:05.278902  642041 out.go:177] * Starting "multinode-114485" primary control-plane node in "multinode-114485" cluster
	I0520 14:01:05.281093  642041 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:01:05.281132  642041 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:01:05.281146  642041 cache.go:56] Caching tarball of preloaded images
	I0520 14:01:05.281257  642041 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:01:05.281272  642041 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 14:01:05.281440  642041 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/config.json ...
	I0520 14:01:05.281682  642041 start.go:360] acquireMachinesLock for multinode-114485: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:01:05.281764  642041 start.go:364] duration metric: took 60.785µs to acquireMachinesLock for "multinode-114485"
	I0520 14:01:05.281785  642041 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:01:05.281800  642041 fix.go:54] fixHost starting: 
	I0520 14:01:05.282087  642041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:01:05.282124  642041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:01:05.297290  642041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0520 14:01:05.298277  642041 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:01:05.299210  642041 main.go:141] libmachine: Using API Version  1
	I0520 14:01:05.299234  642041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:01:05.299602  642041 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:01:05.299833  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.300033  642041 main.go:141] libmachine: (multinode-114485) Calling .GetState
	I0520 14:01:05.301665  642041 fix.go:112] recreateIfNeeded on multinode-114485: state=Running err=<nil>
	W0520 14:01:05.301687  642041 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:01:05.304297  642041 out.go:177] * Updating the running kvm2 "multinode-114485" VM ...
	I0520 14:01:05.306341  642041 machine.go:94] provisionDockerMachine start ...
	I0520 14:01:05.306369  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:01:05.306591  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.308913  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.309410  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.309440  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.309553  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.309744  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.309909  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.310055  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.310252  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.310425  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.310435  642041 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:01:05.426420  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-114485
	
	I0520 14:01:05.426453  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.426713  642041 buildroot.go:166] provisioning hostname "multinode-114485"
	I0520 14:01:05.426738  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.426918  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.429745  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.430236  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.430276  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.430359  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.430531  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.430651  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.430781  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.430979  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.431149  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.431162  642041 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-114485 && echo "multinode-114485" | sudo tee /etc/hostname
	I0520 14:01:05.568302  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-114485
	
	I0520 14:01:05.568346  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.571445  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.571839  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.571867  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.572045  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.572248  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.572419  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.572588  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.572743  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:05.572899  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:05.572916  642041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-114485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-114485/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-114485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:01:05.682797  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 14:01:05.682835  642041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:01:05.682857  642041 buildroot.go:174] setting up certificates
	I0520 14:01:05.682865  642041 provision.go:84] configureAuth start
	I0520 14:01:05.682874  642041 main.go:141] libmachine: (multinode-114485) Calling .GetMachineName
	I0520 14:01:05.683154  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:01:05.686071  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.686302  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.686331  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.686541  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.688925  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.689289  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.689323  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.689448  642041 provision.go:143] copyHostCerts
	I0520 14:01:05.689483  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:01:05.689550  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:01:05.689571  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:01:05.689690  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:01:05.689833  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:01:05.689860  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:01:05.689868  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:01:05.689913  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:01:05.689985  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:01:05.690009  642041 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:01:05.690017  642041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:01:05.690045  642041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:01:05.690111  642041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.multinode-114485 san=[127.0.0.1 192.168.39.141 localhost minikube multinode-114485]
	I0520 14:01:05.866385  642041 provision.go:177] copyRemoteCerts
	I0520 14:01:05.866474  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:01:05.866501  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:05.869642  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.870152  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:05.870180  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:05.870416  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:05.870623  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:05.870807  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:05.871005  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:01:05.955343  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 14:01:05.955423  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:01:05.982335  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 14:01:05.982410  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 14:01:06.005173  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 14:01:06.005252  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:01:06.028983  642041 provision.go:87] duration metric: took 346.104412ms to configureAuth
	I0520 14:01:06.029009  642041 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:01:06.029221  642041 config.go:182] Loaded profile config "multinode-114485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:01:06.029314  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:01:06.032312  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:06.032898  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:01:06.032931  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:01:06.033178  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:01:06.033408  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:06.033629  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:01:06.033803  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:01:06.033995  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:01:06.034179  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:01:06.034200  642041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:02:36.769867  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:02:36.769938  642041 machine.go:97] duration metric: took 1m31.46357262s to provisionDockerMachine
	I0520 14:02:36.769954  642041 start.go:293] postStartSetup for "multinode-114485" (driver="kvm2")
	I0520 14:02:36.769975  642041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:02:36.769999  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:36.770379  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:02:36.770410  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:36.773475  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.773921  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:36.773948  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.774097  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:36.774334  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.774513  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:36.774665  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:36.861348  642041 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:02:36.865269  642041 command_runner.go:130] > NAME=Buildroot
	I0520 14:02:36.865294  642041 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 14:02:36.865301  642041 command_runner.go:130] > ID=buildroot
	I0520 14:02:36.865309  642041 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 14:02:36.865316  642041 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 14:02:36.865361  642041 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:02:36.865377  642041 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:02:36.865442  642041 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:02:36.865534  642041 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:02:36.865547  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /etc/ssl/certs/6098672.pem
	I0520 14:02:36.865631  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:02:36.874596  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:02:36.897015  642041 start.go:296] duration metric: took 127.04368ms for postStartSetup
	I0520 14:02:36.897093  642041 fix.go:56] duration metric: took 1m31.615297003s for fixHost
	I0520 14:02:36.897138  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:36.899571  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.899907  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:36.899940  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:36.900118  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:36.900361  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.900515  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:36.900687  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:36.900892  642041 main.go:141] libmachine: Using SSH client type: native
	I0520 14:02:36.901078  642041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0520 14:02:36.901089  642041 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 14:02:37.010467  642041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716213756.987615858
	
	I0520 14:02:37.010491  642041 fix.go:216] guest clock: 1716213756.987615858
	I0520 14:02:37.010501  642041 fix.go:229] Guest: 2024-05-20 14:02:36.987615858 +0000 UTC Remote: 2024-05-20 14:02:36.897100023 +0000 UTC m=+91.756949501 (delta=90.515835ms)
	I0520 14:02:37.010528  642041 fix.go:200] guest clock delta is within tolerance: 90.515835ms
	I0520 14:02:37.010535  642041 start.go:83] releasing machines lock for "multinode-114485", held for 1m31.728757337s
	I0520 14:02:37.010557  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.010844  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:02:37.012989  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.013425  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.013460  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.013635  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014154  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014371  642041 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 14:02:37.014462  642041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:02:37.014516  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:37.014616  642041 ssh_runner.go:195] Run: cat /version.json
	I0520 14:02:37.014635  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 14:02:37.016903  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017311  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.017341  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017368  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017491  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:37.017666  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:37.017802  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:37.017925  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:37.017946  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:37.017943  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:37.018102  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 14:02:37.018266  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 14:02:37.018435  642041 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 14:02:37.018587  642041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 14:02:37.099083  642041 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 14:02:37.130513  642041 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W0520 14:02:37.131359  642041 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:02:37.131488  642041 ssh_runner.go:195] Run: systemctl --version
	I0520 14:02:37.137215  642041 command_runner.go:130] > systemd 252 (252)
	I0520 14:02:37.137277  642041 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 14:02:37.137414  642041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:02:37.294809  642041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 14:02:37.300283  642041 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 14:02:37.300327  642041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:02:37.300376  642041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:02:37.309196  642041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 14:02:37.309225  642041 start.go:494] detecting cgroup driver to use...
	I0520 14:02:37.309356  642041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:02:37.325558  642041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:02:37.338166  642041 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:02:37.338236  642041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:02:37.351663  642041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:02:37.364828  642041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:02:37.507102  642041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:02:37.651121  642041 docker.go:233] disabling docker service ...
	I0520 14:02:37.651199  642041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:02:37.667573  642041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:02:37.680896  642041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:02:37.818304  642041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:02:37.963082  642041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:02:37.976610  642041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:02:37.994083  642041 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 14:02:37.994612  642041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 14:02:37.994673  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.004754  642041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:02:38.004838  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.015127  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.024888  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.034422  642041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:02:38.044663  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.054709  642041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.065605  642041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:02:38.076094  642041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:02:38.085220  642041 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 14:02:38.085339  642041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:02:38.094475  642041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:02:38.233069  642041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 14:02:39.742685  642041 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.509573316s)
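	For reference, the reconfiguration performed over SSH above (pointing cri-o at the pause:3.9 image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, then restarting the service) can be sketched in Go roughly as below. This is an illustration only, not minikube's implementation; the run helper and the reduced step list are assumptions.

	// crio_overrides.go: minimal sketch of the config edits and restart shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command under sudo and returns a combined error with its output.
	func run(args ...string) error {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("sudo %v: %v: %s", args, err, out)
		}
		return nil
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		steps := [][]string{
			{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
			{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "crio"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
		fmt.Println("cri-o reconfigured and restarted")
	}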
	I0520 14:02:39.742716  642041 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:02:39.742769  642041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:02:39.747790  642041 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 14:02:39.747812  642041 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 14:02:39.747819  642041 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0520 14:02:39.747825  642041 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 14:02:39.747830  642041 command_runner.go:130] > Access: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747837  642041 command_runner.go:130] > Modify: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747842  642041 command_runner.go:130] > Change: 2024-05-20 14:02:39.590135494 +0000
	I0520 14:02:39.747848  642041 command_runner.go:130] >  Birth: -
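	The "Will wait 60s for socket path /var/run/crio/crio.sock" step above amounts to polling stat on the socket path until it appears or the deadline passes. A minimal, hypothetical Go sketch of that loop (waitForSocket is an illustrative name, not a minikube function):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}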
	I0520 14:02:39.747883  642041 start.go:562] Will wait 60s for crictl version
	I0520 14:02:39.747946  642041 ssh_runner.go:195] Run: which crictl
	I0520 14:02:39.751678  642041 command_runner.go:130] > /usr/bin/crictl
	I0520 14:02:39.751754  642041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:02:39.786771  642041 command_runner.go:130] > Version:  0.1.0
	I0520 14:02:39.786797  642041 command_runner.go:130] > RuntimeName:  cri-o
	I0520 14:02:39.786805  642041 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 14:02:39.786812  642041 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 14:02:39.788050  642041 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 14:02:39.788150  642041 ssh_runner.go:195] Run: crio --version
	I0520 14:02:39.814493  642041 command_runner.go:130] > crio version 1.29.1
	I0520 14:02:39.814519  642041 command_runner.go:130] > Version:        1.29.1
	I0520 14:02:39.814528  642041 command_runner.go:130] > GitCommit:      unknown
	I0520 14:02:39.814534  642041 command_runner.go:130] > GitCommitDate:  unknown
	I0520 14:02:39.814540  642041 command_runner.go:130] > GitTreeState:   clean
	I0520 14:02:39.814547  642041 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 14:02:39.814552  642041 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 14:02:39.814558  642041 command_runner.go:130] > Compiler:       gc
	I0520 14:02:39.814565  642041 command_runner.go:130] > Platform:       linux/amd64
	I0520 14:02:39.814570  642041 command_runner.go:130] > Linkmode:       dynamic
	I0520 14:02:39.814578  642041 command_runner.go:130] > BuildTags:      
	I0520 14:02:39.814585  642041 command_runner.go:130] >   containers_image_ostree_stub
	I0520 14:02:39.814596  642041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 14:02:39.814602  642041 command_runner.go:130] >   btrfs_noversion
	I0520 14:02:39.814611  642041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 14:02:39.814621  642041 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 14:02:39.814628  642041 command_runner.go:130] >   seccomp
	I0520 14:02:39.814645  642041 command_runner.go:130] > LDFlags:          unknown
	I0520 14:02:39.814655  642041 command_runner.go:130] > SeccompEnabled:   true
	I0520 14:02:39.814663  642041 command_runner.go:130] > AppArmorEnabled:  false
	I0520 14:02:39.815765  642041 ssh_runner.go:195] Run: crio --version
	I0520 14:02:39.841483  642041 command_runner.go:130] > crio version 1.29.1
	I0520 14:02:39.841514  642041 command_runner.go:130] > Version:        1.29.1
	I0520 14:02:39.841523  642041 command_runner.go:130] > GitCommit:      unknown
	I0520 14:02:39.841530  642041 command_runner.go:130] > GitCommitDate:  unknown
	I0520 14:02:39.841537  642041 command_runner.go:130] > GitTreeState:   clean
	I0520 14:02:39.841546  642041 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 14:02:39.841553  642041 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 14:02:39.841561  642041 command_runner.go:130] > Compiler:       gc
	I0520 14:02:39.841568  642041 command_runner.go:130] > Platform:       linux/amd64
	I0520 14:02:39.841578  642041 command_runner.go:130] > Linkmode:       dynamic
	I0520 14:02:39.841586  642041 command_runner.go:130] > BuildTags:      
	I0520 14:02:39.841596  642041 command_runner.go:130] >   containers_image_ostree_stub
	I0520 14:02:39.841606  642041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 14:02:39.841616  642041 command_runner.go:130] >   btrfs_noversion
	I0520 14:02:39.841626  642041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 14:02:39.841635  642041 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 14:02:39.841641  642041 command_runner.go:130] >   seccomp
	I0520 14:02:39.841651  642041 command_runner.go:130] > LDFlags:          unknown
	I0520 14:02:39.841658  642041 command_runner.go:130] > SeccompEnabled:   true
	I0520 14:02:39.841668  642041 command_runner.go:130] > AppArmorEnabled:  false
	I0520 14:02:39.849023  642041 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 14:02:39.851282  642041 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 14:02:39.854086  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:39.854503  642041 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 14:02:39.854535  642041 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 14:02:39.854742  642041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 14:02:39.860812  642041 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 14:02:39.861237  642041 kubeadm.go:877] updating cluster {Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 14:02:39.861390  642041 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:02:39.861448  642041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:02:39.897694  642041 command_runner.go:130] > {
	I0520 14:02:39.897718  642041 command_runner.go:130] >   "images": [
	I0520 14:02:39.897722  642041 command_runner.go:130] >     {
	I0520 14:02:39.897730  642041 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 14:02:39.897735  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897741  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 14:02:39.897744  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897748  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897757  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 14:02:39.897767  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 14:02:39.897772  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897779  642041 command_runner.go:130] >       "size": "65291810",
	I0520 14:02:39.897787  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.897792  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.897807  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.897815  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.897821  642041 command_runner.go:130] >     },
	I0520 14:02:39.897826  642041 command_runner.go:130] >     {
	I0520 14:02:39.897836  642041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 14:02:39.897842  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897848  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 14:02:39.897852  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897856  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897863  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 14:02:39.897870  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 14:02:39.897875  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897880  642041 command_runner.go:130] >       "size": "1363676",
	I0520 14:02:39.897887  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.897905  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.897915  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.897923  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.897934  642041 command_runner.go:130] >     },
	I0520 14:02:39.897942  642041 command_runner.go:130] >     {
	I0520 14:02:39.897949  642041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 14:02:39.897956  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.897961  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 14:02:39.897967  642041 command_runner.go:130] >       ],
	I0520 14:02:39.897972  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.897988  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 14:02:39.898004  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 14:02:39.898014  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898024  642041 command_runner.go:130] >       "size": "31470524",
	I0520 14:02:39.898034  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898042  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898046  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898052  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898056  642041 command_runner.go:130] >     },
	I0520 14:02:39.898061  642041 command_runner.go:130] >     {
	I0520 14:02:39.898068  642041 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 14:02:39.898074  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898081  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 14:02:39.898090  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898103  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898119  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 14:02:39.898145  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 14:02:39.898155  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898160  642041 command_runner.go:130] >       "size": "61245718",
	I0520 14:02:39.898167  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898171  642041 command_runner.go:130] >       "username": "nonroot",
	I0520 14:02:39.898178  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898184  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898192  642041 command_runner.go:130] >     },
	I0520 14:02:39.898200  642041 command_runner.go:130] >     {
	I0520 14:02:39.898214  642041 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 14:02:39.898224  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898235  642041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 14:02:39.898244  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898253  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898269  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 14:02:39.898281  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 14:02:39.898289  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898295  642041 command_runner.go:130] >       "size": "150779692",
	I0520 14:02:39.898305  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898312  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898321  642041 command_runner.go:130] >       },
	I0520 14:02:39.898332  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898341  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898350  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898358  642041 command_runner.go:130] >     },
	I0520 14:02:39.898366  642041 command_runner.go:130] >     {
	I0520 14:02:39.898379  642041 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 14:02:39.898386  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898393  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 14:02:39.898402  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898412  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898427  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 14:02:39.898441  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 14:02:39.898450  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898459  642041 command_runner.go:130] >       "size": "117601759",
	I0520 14:02:39.898466  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898471  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898479  642041 command_runner.go:130] >       },
	I0520 14:02:39.898489  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898499  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898508  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898517  642041 command_runner.go:130] >     },
	I0520 14:02:39.898525  642041 command_runner.go:130] >     {
	I0520 14:02:39.898535  642041 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 14:02:39.898545  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898554  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 14:02:39.898561  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898567  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898584  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 14:02:39.898598  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 14:02:39.898609  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898618  642041 command_runner.go:130] >       "size": "112170310",
	I0520 14:02:39.898627  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898636  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898642  642041 command_runner.go:130] >       },
	I0520 14:02:39.898647  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898657  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898666  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898672  642041 command_runner.go:130] >     },
	I0520 14:02:39.898682  642041 command_runner.go:130] >     {
	I0520 14:02:39.898695  642041 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 14:02:39.898705  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898716  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 14:02:39.898724  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898733  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898760  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 14:02:39.898772  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 14:02:39.898777  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898784  642041 command_runner.go:130] >       "size": "85933465",
	I0520 14:02:39.898790  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.898796  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898803  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898811  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898816  642041 command_runner.go:130] >     },
	I0520 14:02:39.898821  642041 command_runner.go:130] >     {
	I0520 14:02:39.898827  642041 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 14:02:39.898835  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.898844  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 14:02:39.898849  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898856  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.898870  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 14:02:39.898885  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 14:02:39.898893  642041 command_runner.go:130] >       ],
	I0520 14:02:39.898900  642041 command_runner.go:130] >       "size": "63026504",
	I0520 14:02:39.898909  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.898913  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.898926  642041 command_runner.go:130] >       },
	I0520 14:02:39.898954  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.898962  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.898968  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.898973  642041 command_runner.go:130] >     },
	I0520 14:02:39.898978  642041 command_runner.go:130] >     {
	I0520 14:02:39.898987  642041 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 14:02:39.898996  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.899006  642041 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 14:02:39.899011  642041 command_runner.go:130] >       ],
	I0520 14:02:39.899020  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.899033  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 14:02:39.899047  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 14:02:39.899055  642041 command_runner.go:130] >       ],
	I0520 14:02:39.899061  642041 command_runner.go:130] >       "size": "750414",
	I0520 14:02:39.899071  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.899078  642041 command_runner.go:130] >         "value": "65535"
	I0520 14:02:39.899087  642041 command_runner.go:130] >       },
	I0520 14:02:39.899093  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.899103  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.899110  642041 command_runner.go:130] >       "pinned": true
	I0520 14:02:39.899118  642041 command_runner.go:130] >     }
	I0520 14:02:39.899124  642041 command_runner.go:130] >   ]
	I0520 14:02:39.899133  642041 command_runner.go:130] > }
	I0520 14:02:39.899365  642041 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:02:39.899381  642041 crio.go:433] Images already preloaded, skipping extraction
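	The preload check above reads the "crictl images --output json" payload and compares repo tags against the expected Kubernetes v1.30.1 image set. A self-contained Go sketch of that idea follows; the struct shape is inferred from the JSON dumped above and the tag list is a small assumed subset, so this is not minikube's cache_images code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors only the fields of the crictl JSON output used here.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// A few of the images the report expects for Kubernetes v1.30.1 on cri-o.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/pause:3.9",
		} {
			fmt.Printf("%-45s preloaded=%v\n", want, have[want])
		}
	}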
	I0520 14:02:39.899432  642041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:02:39.930098  642041 command_runner.go:130] > {
	I0520 14:02:39.930127  642041 command_runner.go:130] >   "images": [
	I0520 14:02:39.930133  642041 command_runner.go:130] >     {
	I0520 14:02:39.930146  642041 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 14:02:39.930153  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930162  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 14:02:39.930167  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930175  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930189  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 14:02:39.930203  642041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 14:02:39.930213  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930223  642041 command_runner.go:130] >       "size": "65291810",
	I0520 14:02:39.930232  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930241  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930259  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930269  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930275  642041 command_runner.go:130] >     },
	I0520 14:02:39.930284  642041 command_runner.go:130] >     {
	I0520 14:02:39.930298  642041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 14:02:39.930308  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930319  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 14:02:39.930327  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930338  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930352  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 14:02:39.930367  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 14:02:39.930375  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930384  642041 command_runner.go:130] >       "size": "1363676",
	I0520 14:02:39.930392  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930403  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930412  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930420  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930428  642041 command_runner.go:130] >     },
	I0520 14:02:39.930436  642041 command_runner.go:130] >     {
	I0520 14:02:39.930444  642041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 14:02:39.930453  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930461  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 14:02:39.930469  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930478  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930491  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 14:02:39.930506  642041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 14:02:39.930514  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930524  642041 command_runner.go:130] >       "size": "31470524",
	I0520 14:02:39.930533  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930543  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930548  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930557  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930564  642041 command_runner.go:130] >     },
	I0520 14:02:39.930571  642041 command_runner.go:130] >     {
	I0520 14:02:39.930580  642041 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 14:02:39.930589  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930600  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 14:02:39.930608  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930617  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930630  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 14:02:39.930648  642041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 14:02:39.930657  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930667  642041 command_runner.go:130] >       "size": "61245718",
	I0520 14:02:39.930676  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.930686  642041 command_runner.go:130] >       "username": "nonroot",
	I0520 14:02:39.930695  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930704  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930713  642041 command_runner.go:130] >     },
	I0520 14:02:39.930719  642041 command_runner.go:130] >     {
	I0520 14:02:39.930732  642041 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 14:02:39.930741  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930751  642041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 14:02:39.930759  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930769  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930783  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 14:02:39.930796  642041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 14:02:39.930804  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930810  642041 command_runner.go:130] >       "size": "150779692",
	I0520 14:02:39.930819  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.930828  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.930837  642041 command_runner.go:130] >       },
	I0520 14:02:39.930846  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.930857  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.930867  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.930875  642041 command_runner.go:130] >     },
	I0520 14:02:39.930883  642041 command_runner.go:130] >     {
	I0520 14:02:39.930895  642041 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 14:02:39.930904  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.930914  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 14:02:39.930922  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930938  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.930953  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 14:02:39.930967  642041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 14:02:39.930975  642041 command_runner.go:130] >       ],
	I0520 14:02:39.930981  642041 command_runner.go:130] >       "size": "117601759",
	I0520 14:02:39.930988  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.930993  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931002  642041 command_runner.go:130] >       },
	I0520 14:02:39.931007  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931014  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931023  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931030  642041 command_runner.go:130] >     },
	I0520 14:02:39.931038  642041 command_runner.go:130] >     {
	I0520 14:02:39.931046  642041 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 14:02:39.931054  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931065  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 14:02:39.931071  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931077  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931091  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 14:02:39.931105  642041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 14:02:39.931114  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931124  642041 command_runner.go:130] >       "size": "112170310",
	I0520 14:02:39.931134  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931143  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931151  642041 command_runner.go:130] >       },
	I0520 14:02:39.931161  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931170  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931179  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931188  642041 command_runner.go:130] >     },
	I0520 14:02:39.931197  642041 command_runner.go:130] >     {
	I0520 14:02:39.931210  642041 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 14:02:39.931223  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931235  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 14:02:39.931243  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931250  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931275  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 14:02:39.931288  642041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 14:02:39.931294  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931299  642041 command_runner.go:130] >       "size": "85933465",
	I0520 14:02:39.931305  642041 command_runner.go:130] >       "uid": null,
	I0520 14:02:39.931309  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931315  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931319  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931325  642041 command_runner.go:130] >     },
	I0520 14:02:39.931329  642041 command_runner.go:130] >     {
	I0520 14:02:39.931337  642041 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 14:02:39.931344  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931349  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 14:02:39.931354  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931359  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931368  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 14:02:39.931377  642041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 14:02:39.931383  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931387  642041 command_runner.go:130] >       "size": "63026504",
	I0520 14:02:39.931394  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931397  642041 command_runner.go:130] >         "value": "0"
	I0520 14:02:39.931402  642041 command_runner.go:130] >       },
	I0520 14:02:39.931405  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931411  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931424  642041 command_runner.go:130] >       "pinned": false
	I0520 14:02:39.931429  642041 command_runner.go:130] >     },
	I0520 14:02:39.931434  642041 command_runner.go:130] >     {
	I0520 14:02:39.931443  642041 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 14:02:39.931450  642041 command_runner.go:130] >       "repoTags": [
	I0520 14:02:39.931457  642041 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 14:02:39.931462  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931469  642041 command_runner.go:130] >       "repoDigests": [
	I0520 14:02:39.931481  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 14:02:39.931492  642041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 14:02:39.931495  642041 command_runner.go:130] >       ],
	I0520 14:02:39.931500  642041 command_runner.go:130] >       "size": "750414",
	I0520 14:02:39.931503  642041 command_runner.go:130] >       "uid": {
	I0520 14:02:39.931507  642041 command_runner.go:130] >         "value": "65535"
	I0520 14:02:39.931510  642041 command_runner.go:130] >       },
	I0520 14:02:39.931517  642041 command_runner.go:130] >       "username": "",
	I0520 14:02:39.931524  642041 command_runner.go:130] >       "spec": null,
	I0520 14:02:39.931532  642041 command_runner.go:130] >       "pinned": true
	I0520 14:02:39.931540  642041 command_runner.go:130] >     }
	I0520 14:02:39.931545  642041 command_runner.go:130] >   ]
	I0520 14:02:39.931549  642041 command_runner.go:130] > }
	I0520 14:02:39.931784  642041 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:02:39.931803  642041 cache_images.go:84] Images are preloaded, skipping loading
	I0520 14:02:39.931812  642041 kubeadm.go:928] updating node { 192.168.39.141 8443 v1.30.1 crio true true} ...
	I0520 14:02:39.931923  642041 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-114485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
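	The kubelet unit printed by kubeadm.go:940 above is a drop-in filled with the node's name, IP, and Kubernetes version. A hypothetical Go sketch that renders an equivalent unit with text/template; the template text and field names are illustrative, not minikube's implementation, though the flags mirror the ExecStart line shown above.

	package main

	import (
		"os"
		"text/template"
	)

	// unitTmpl reproduces the shape of the drop-in logged above.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.30.1", "multinode-114485", "192.168.39.141"}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}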
	I0520 14:02:39.931996  642041 ssh_runner.go:195] Run: crio config
	I0520 14:02:39.966426  642041 command_runner.go:130] ! time="2024-05-20 14:02:39.943707117Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 14:02:39.973837  642041 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 14:02:39.981673  642041 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 14:02:39.981700  642041 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 14:02:39.981710  642041 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 14:02:39.981714  642041 command_runner.go:130] > #
	I0520 14:02:39.981724  642041 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 14:02:39.981730  642041 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 14:02:39.981737  642041 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 14:02:39.981743  642041 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 14:02:39.981749  642041 command_runner.go:130] > # reload'.
	I0520 14:02:39.981755  642041 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 14:02:39.981764  642041 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 14:02:39.981770  642041 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 14:02:39.981779  642041 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 14:02:39.981783  642041 command_runner.go:130] > [crio]
	I0520 14:02:39.981796  642041 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 14:02:39.981807  642041 command_runner.go:130] > # containers images, in this directory.
	I0520 14:02:39.981814  642041 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 14:02:39.981831  642041 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 14:02:39.981841  642041 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 14:02:39.981853  642041 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 14:02:39.981860  642041 command_runner.go:130] > # imagestore = ""
	I0520 14:02:39.981865  642041 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 14:02:39.981874  642041 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 14:02:39.981878  642041 command_runner.go:130] > storage_driver = "overlay"
	I0520 14:02:39.981887  642041 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 14:02:39.981896  642041 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 14:02:39.981906  642041 command_runner.go:130] > storage_option = [
	I0520 14:02:39.981913  642041 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 14:02:39.981925  642041 command_runner.go:130] > ]
	I0520 14:02:39.981939  642041 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 14:02:39.981951  642041 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 14:02:39.981961  642041 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 14:02:39.981968  642041 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 14:02:39.981975  642041 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 14:02:39.981979  642041 command_runner.go:130] > # always happen on a node reboot
	I0520 14:02:39.981984  642041 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 14:02:39.982001  642041 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 14:02:39.982015  642041 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 14:02:39.982028  642041 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 14:02:39.982040  642041 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 14:02:39.982054  642041 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 14:02:39.982069  642041 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 14:02:39.982078  642041 command_runner.go:130] > # internal_wipe = true
	I0520 14:02:39.982087  642041 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 14:02:39.982099  642041 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 14:02:39.982110  642041 command_runner.go:130] > # internal_repair = false
	I0520 14:02:39.982119  642041 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 14:02:39.982131  642041 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 14:02:39.982143  642041 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 14:02:39.982155  642041 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 14:02:39.982166  642041 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 14:02:39.982176  642041 command_runner.go:130] > [crio.api]
	I0520 14:02:39.982184  642041 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 14:02:39.982194  642041 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 14:02:39.982203  642041 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 14:02:39.982213  642041 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 14:02:39.982225  642041 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 14:02:39.982236  642041 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 14:02:39.982245  642041 command_runner.go:130] > # stream_port = "0"
	I0520 14:02:39.982254  642041 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 14:02:39.982261  642041 command_runner.go:130] > # stream_enable_tls = false
	I0520 14:02:39.982273  642041 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 14:02:39.982282  642041 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 14:02:39.982288  642041 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 14:02:39.982298  642041 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 14:02:39.982303  642041 command_runner.go:130] > # minutes.
	I0520 14:02:39.982307  642041 command_runner.go:130] > # stream_tls_cert = ""
	I0520 14:02:39.982314  642041 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 14:02:39.982319  642041 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 14:02:39.982328  642041 command_runner.go:130] > # stream_tls_key = ""
	I0520 14:02:39.982334  642041 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 14:02:39.982342  642041 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 14:02:39.982357  642041 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 14:02:39.982363  642041 command_runner.go:130] > # stream_tls_ca = ""
	I0520 14:02:39.982370  642041 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 14:02:39.982377  642041 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 14:02:39.982383  642041 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 14:02:39.982390  642041 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 14:02:39.982396  642041 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 14:02:39.982402  642041 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 14:02:39.982405  642041 command_runner.go:130] > [crio.runtime]
	I0520 14:02:39.982411  642041 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 14:02:39.982419  642041 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 14:02:39.982422  642041 command_runner.go:130] > # "nofile=1024:2048"
	I0520 14:02:39.982431  642041 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 14:02:39.982434  642041 command_runner.go:130] > # default_ulimits = [
	I0520 14:02:39.982440  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982446  642041 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 14:02:39.982452  642041 command_runner.go:130] > # no_pivot = false
	I0520 14:02:39.982457  642041 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 14:02:39.982465  642041 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 14:02:39.982470  642041 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 14:02:39.982478  642041 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 14:02:39.982485  642041 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 14:02:39.982493  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 14:02:39.982497  642041 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 14:02:39.982504  642041 command_runner.go:130] > # Cgroup setting for conmon
	I0520 14:02:39.982510  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 14:02:39.982516  642041 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 14:02:39.982522  642041 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 14:02:39.982530  642041 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 14:02:39.982536  642041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 14:02:39.982543  642041 command_runner.go:130] > conmon_env = [
	I0520 14:02:39.982549  642041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 14:02:39.982554  642041 command_runner.go:130] > ]
	I0520 14:02:39.982559  642041 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 14:02:39.982566  642041 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 14:02:39.982571  642041 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 14:02:39.982577  642041 command_runner.go:130] > # default_env = [
	I0520 14:02:39.982580  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982585  642041 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 14:02:39.982594  642041 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 14:02:39.982598  642041 command_runner.go:130] > # selinux = false
	I0520 14:02:39.982604  642041 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 14:02:39.982615  642041 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 14:02:39.982622  642041 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 14:02:39.982626  642041 command_runner.go:130] > # seccomp_profile = ""
	I0520 14:02:39.982632  642041 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 14:02:39.982640  642041 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 14:02:39.982646  642041 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 14:02:39.982652  642041 command_runner.go:130] > # which might increase security.
	I0520 14:02:39.982656  642041 command_runner.go:130] > # This option is currently deprecated,
	I0520 14:02:39.982665  642041 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 14:02:39.982669  642041 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 14:02:39.982676  642041 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 14:02:39.982683  642041 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 14:02:39.982690  642041 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 14:02:39.982696  642041 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 14:02:39.982700  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.982704  642041 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 14:02:39.982710  642041 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 14:02:39.982716  642041 command_runner.go:130] > # the cgroup blockio controller.
	I0520 14:02:39.982720  642041 command_runner.go:130] > # blockio_config_file = ""
	I0520 14:02:39.982729  642041 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 14:02:39.982733  642041 command_runner.go:130] > # blockio parameters.
	I0520 14:02:39.982739  642041 command_runner.go:130] > # blockio_reload = false
	I0520 14:02:39.982746  642041 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 14:02:39.982751  642041 command_runner.go:130] > # irqbalance daemon.
	I0520 14:02:39.982756  642041 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 14:02:39.982764  642041 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 14:02:39.982770  642041 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 14:02:39.982778  642041 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 14:02:39.982784  642041 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 14:02:39.982792  642041 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 14:02:39.982797  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.982801  642041 command_runner.go:130] > # rdt_config_file = ""
	I0520 14:02:39.982810  642041 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 14:02:39.982814  642041 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 14:02:39.982834  642041 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 14:02:39.982841  642041 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 14:02:39.982847  642041 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 14:02:39.982855  642041 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 14:02:39.982859  642041 command_runner.go:130] > # will be added.
	I0520 14:02:39.982864  642041 command_runner.go:130] > # default_capabilities = [
	I0520 14:02:39.982867  642041 command_runner.go:130] > # 	"CHOWN",
	I0520 14:02:39.982873  642041 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 14:02:39.982877  642041 command_runner.go:130] > # 	"FSETID",
	I0520 14:02:39.982880  642041 command_runner.go:130] > # 	"FOWNER",
	I0520 14:02:39.982884  642041 command_runner.go:130] > # 	"SETGID",
	I0520 14:02:39.982887  642041 command_runner.go:130] > # 	"SETUID",
	I0520 14:02:39.982891  642041 command_runner.go:130] > # 	"SETPCAP",
	I0520 14:02:39.982895  642041 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 14:02:39.982901  642041 command_runner.go:130] > # 	"KILL",
	I0520 14:02:39.982904  642041 command_runner.go:130] > # ]
	I0520 14:02:39.982915  642041 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 14:02:39.982928  642041 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 14:02:39.982937  642041 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 14:02:39.982948  642041 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 14:02:39.982960  642041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 14:02:39.982969  642041 command_runner.go:130] > default_sysctls = [
	I0520 14:02:39.982976  642041 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 14:02:39.982980  642041 command_runner.go:130] > ]
	I0520 14:02:39.982987  642041 command_runner.go:130] > # List of devices on the host that a
	I0520 14:02:39.982998  642041 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 14:02:39.983004  642041 command_runner.go:130] > # allowed_devices = [
	I0520 14:02:39.983008  642041 command_runner.go:130] > # 	"/dev/fuse",
	I0520 14:02:39.983012  642041 command_runner.go:130] > # ]
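For illustration only: a pod can request one of the devices listed above through the "io.kubernetes.cri-o.Devices" annotation mentioned in these comments. A minimal sketch, assuming a runtime handler that carries this annotation in its allowed_annotations; the pod name, image, and exact value syntax are assumptions, not values taken from this run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: fuse-demo                # illustrative name
      annotations:
        # Requests /dev/fuse, which appears in the allowed_devices list above.
        io.kubernetes.cri-o.Devices: "/dev/fuse"
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox               # illustrative image
        command: ["ls", "-l", "/dev/fuse"]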
	I0520 14:02:39.983017  642041 command_runner.go:130] > # List of additional devices. specified as
	I0520 14:02:39.983033  642041 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 14:02:39.983040  642041 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 14:02:39.983046  642041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 14:02:39.983053  642041 command_runner.go:130] > # additional_devices = [
	I0520 14:02:39.983056  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983064  642041 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 14:02:39.983068  642041 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 14:02:39.983072  642041 command_runner.go:130] > # 	"/etc/cdi",
	I0520 14:02:39.983076  642041 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 14:02:39.983082  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983088  642041 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 14:02:39.983096  642041 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 14:02:39.983099  642041 command_runner.go:130] > # Defaults to false.
	I0520 14:02:39.983106  642041 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 14:02:39.983112  642041 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 14:02:39.983120  642041 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 14:02:39.983124  642041 command_runner.go:130] > # hooks_dir = [
	I0520 14:02:39.983130  642041 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 14:02:39.983134  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983140  642041 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 14:02:39.983148  642041 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 14:02:39.983153  642041 command_runner.go:130] > # its default mounts from the following two files:
	I0520 14:02:39.983156  642041 command_runner.go:130] > #
	I0520 14:02:39.983164  642041 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 14:02:39.983177  642041 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 14:02:39.983189  642041 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 14:02:39.983198  642041 command_runner.go:130] > #
	I0520 14:02:39.983207  642041 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 14:02:39.983220  642041 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 14:02:39.983233  642041 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 14:02:39.983247  642041 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 14:02:39.983255  642041 command_runner.go:130] > #
	I0520 14:02:39.983262  642041 command_runner.go:130] > # default_mounts_file = ""
	I0520 14:02:39.983273  642041 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 14:02:39.983282  642041 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 14:02:39.983289  642041 command_runner.go:130] > pids_limit = 1024
	I0520 14:02:39.983295  642041 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0520 14:02:39.983303  642041 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 14:02:39.983310  642041 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 14:02:39.983320  642041 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 14:02:39.983326  642041 command_runner.go:130] > # log_size_max = -1
	I0520 14:02:39.983333  642041 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 14:02:39.983337  642041 command_runner.go:130] > # log_to_journald = false
	I0520 14:02:39.983344  642041 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 14:02:39.983351  642041 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 14:02:39.983356  642041 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 14:02:39.983363  642041 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 14:02:39.983369  642041 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 14:02:39.983375  642041 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 14:02:39.983380  642041 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 14:02:39.983386  642041 command_runner.go:130] > # read_only = false
	I0520 14:02:39.983392  642041 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 14:02:39.983400  642041 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 14:02:39.983404  642041 command_runner.go:130] > # live configuration reload.
	I0520 14:02:39.983408  642041 command_runner.go:130] > # log_level = "info"
	I0520 14:02:39.983414  642041 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 14:02:39.983421  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.983424  642041 command_runner.go:130] > # log_filter = ""
	I0520 14:02:39.983435  642041 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 14:02:39.983445  642041 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 14:02:39.983449  642041 command_runner.go:130] > # separated by comma.
	I0520 14:02:39.983456  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983462  642041 command_runner.go:130] > # uid_mappings = ""
	I0520 14:02:39.983468  642041 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 14:02:39.983477  642041 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 14:02:39.983481  642041 command_runner.go:130] > # separated by comma.
	I0520 14:02:39.983490  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983496  642041 command_runner.go:130] > # gid_mappings = ""
	I0520 14:02:39.983502  642041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 14:02:39.983510  642041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 14:02:39.983516  642041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 14:02:39.983526  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983530  642041 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 14:02:39.983539  642041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 14:02:39.983545  642041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 14:02:39.983553  642041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 14:02:39.983560  642041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 14:02:39.983567  642041 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 14:02:39.983573  642041 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 14:02:39.983578  642041 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 14:02:39.983586  642041 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 14:02:39.983590  642041 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 14:02:39.983597  642041 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 14:02:39.983603  642041 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 14:02:39.983610  642041 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 14:02:39.983615  642041 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 14:02:39.983619  642041 command_runner.go:130] > drop_infra_ctr = false
	I0520 14:02:39.983625  642041 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 14:02:39.983633  642041 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 14:02:39.983640  642041 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 14:02:39.983646  642041 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 14:02:39.983652  642041 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 14:02:39.983659  642041 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 14:02:39.983665  642041 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 14:02:39.983669  642041 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 14:02:39.983677  642041 command_runner.go:130] > # shared_cpuset = ""
	I0520 14:02:39.983682  642041 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 14:02:39.983689  642041 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 14:02:39.983693  642041 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 14:02:39.983702  642041 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 14:02:39.983706  642041 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 14:02:39.983714  642041 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 14:02:39.983721  642041 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 14:02:39.983728  642041 command_runner.go:130] > # enable_criu_support = false
	I0520 14:02:39.983732  642041 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 14:02:39.983740  642041 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 14:02:39.983744  642041 command_runner.go:130] > # enable_pod_events = false
	I0520 14:02:39.983753  642041 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 14:02:39.983766  642041 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 14:02:39.983770  642041 command_runner.go:130] > # default_runtime = "runc"
	I0520 14:02:39.983777  642041 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 14:02:39.983785  642041 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 14:02:39.983796  642041 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 14:02:39.983804  642041 command_runner.go:130] > # creation as a file is not desired either.
	I0520 14:02:39.983812  642041 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 14:02:39.983819  642041 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 14:02:39.983823  642041 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 14:02:39.983827  642041 command_runner.go:130] > # ]
	I0520 14:02:39.983833  642041 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 14:02:39.983842  642041 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 14:02:39.983848  642041 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 14:02:39.983855  642041 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 14:02:39.983858  642041 command_runner.go:130] > #
	I0520 14:02:39.983865  642041 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 14:02:39.983870  642041 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 14:02:39.983899  642041 command_runner.go:130] > # runtime_type = "oci"
	I0520 14:02:39.983908  642041 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 14:02:39.983912  642041 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 14:02:39.983916  642041 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 14:02:39.983920  642041 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 14:02:39.983924  642041 command_runner.go:130] > # monitor_env = []
	I0520 14:02:39.983929  642041 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 14:02:39.983935  642041 command_runner.go:130] > # allowed_annotations = []
	I0520 14:02:39.983940  642041 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 14:02:39.983946  642041 command_runner.go:130] > # Where:
	I0520 14:02:39.983951  642041 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 14:02:39.983960  642041 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 14:02:39.983967  642041 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 14:02:39.983975  642041 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 14:02:39.983980  642041 command_runner.go:130] > #   in $PATH.
	I0520 14:02:39.983987  642041 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 14:02:39.983992  642041 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 14:02:39.984000  642041 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 14:02:39.984004  642041 command_runner.go:130] > #   state.
	I0520 14:02:39.984009  642041 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 14:02:39.984015  642041 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 14:02:39.984025  642041 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 14:02:39.984033  642041 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 14:02:39.984039  642041 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 14:02:39.984048  642041 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 14:02:39.984052  642041 command_runner.go:130] > #   The currently recognized values are:
	I0520 14:02:39.984058  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 14:02:39.984068  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 14:02:39.984074  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 14:02:39.984082  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 14:02:39.984088  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 14:02:39.984097  642041 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 14:02:39.984104  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 14:02:39.984112  642041 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 14:02:39.984118  642041 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 14:02:39.984126  642041 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 14:02:39.984133  642041 command_runner.go:130] > #   deprecated option "conmon".
	I0520 14:02:39.984142  642041 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 14:02:39.984146  642041 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 14:02:39.984154  642041 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 14:02:39.984159  642041 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 14:02:39.984171  642041 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0520 14:02:39.984182  642041 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 14:02:39.984193  642041 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 14:02:39.984205  642041 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 14:02:39.984212  642041 command_runner.go:130] > #
	I0520 14:02:39.984220  642041 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 14:02:39.984229  642041 command_runner.go:130] > #
	I0520 14:02:39.984240  642041 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 14:02:39.984251  642041 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 14:02:39.984254  642041 command_runner.go:130] > #
	I0520 14:02:39.984260  642041 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 14:02:39.984269  642041 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 14:02:39.984272  642041 command_runner.go:130] > #
	I0520 14:02:39.984282  642041 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 14:02:39.984286  642041 command_runner.go:130] > # feature.
	I0520 14:02:39.984291  642041 command_runner.go:130] > #
	I0520 14:02:39.984297  642041 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 14:02:39.984304  642041 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 14:02:39.984311  642041 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 14:02:39.984318  642041 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 14:02:39.984324  642041 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 14:02:39.984330  642041 command_runner.go:130] > #
	I0520 14:02:39.984335  642041 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 14:02:39.984343  642041 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 14:02:39.984346  642041 command_runner.go:130] > #
	I0520 14:02:39.984352  642041 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 14:02:39.984360  642041 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 14:02:39.984363  642041 command_runner.go:130] > #
	I0520 14:02:39.984369  642041 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 14:02:39.984377  642041 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 14:02:39.984381  642041 command_runner.go:130] > # limitation.
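To make the notifier flow described above concrete, here is a hedged pod sketch. It assumes a runtime handler whose allowed_annotations contains "io.kubernetes.cri-o.seccompNotifierAction"; the pod name, image, and seccomp profile choice are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo    # illustrative name
      annotations:
        # Per the comments above: stop the workload ~5s after a blocked syscall is notified.
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      # Required by the feature, as noted above; otherwise the kubelet restarts the container.
      restartPolicy: Never
      containers:
      - name: app
        image: busybox               # illustrative image
        securityContext:
          seccompProfile:
            type: RuntimeDefault     # assumption: rely on the runtime's default profile
        command: ["sleep", "3600"]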
	I0520 14:02:39.984388  642041 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 14:02:39.984392  642041 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 14:02:39.984398  642041 command_runner.go:130] > runtime_type = "oci"
	I0520 14:02:39.984402  642041 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 14:02:39.984408  642041 command_runner.go:130] > runtime_config_path = ""
	I0520 14:02:39.984413  642041 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 14:02:39.984421  642041 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 14:02:39.984425  642041 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 14:02:39.984431  642041 command_runner.go:130] > monitor_env = [
	I0520 14:02:39.984437  642041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 14:02:39.984443  642041 command_runner.go:130] > ]
	I0520 14:02:39.984448  642041 command_runner.go:130] > privileged_without_host_devices = false
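Because the runtime is selected by the handler name the CRI passes in, a Kubernetes RuntimeClass is the usual way to route pods to an entry in this table. A minimal sketch against the "runc" handler defined above; the RuntimeClass and pod names are arbitrary placeholders:

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: runc-class               # arbitrary name
    handler: runc                    # must match [crio.runtime.runtimes.runc] above
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: runtimeclass-demo        # placeholder
    spec:
      runtimeClassName: runc-class
      containers:
      - name: app
        image: busybox               # placeholder
        command: ["sleep", "3600"]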
	I0520 14:02:39.984457  642041 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 14:02:39.984462  642041 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 14:02:39.984471  642041 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 14:02:39.984478  642041 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 14:02:39.984488  642041 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 14:02:39.984496  642041 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 14:02:39.984504  642041 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 14:02:39.984513  642041 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 14:02:39.984520  642041 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 14:02:39.984529  642041 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 14:02:39.984532  642041 command_runner.go:130] > # Example:
	I0520 14:02:39.984539  642041 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 14:02:39.984544  642041 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 14:02:39.984549  642041 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 14:02:39.984554  642041 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 14:02:39.984559  642041 command_runner.go:130] > # cpuset = 0
	I0520 14:02:39.984563  642041 command_runner.go:130] > # cpushares = "0-1"
	I0520 14:02:39.984566  642041 command_runner.go:130] > # Where:
	I0520 14:02:39.984574  642041 command_runner.go:130] > # The workload name is workload-type.
	I0520 14:02:39.984581  642041 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 14:02:39.984588  642041 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 14:02:39.984594  642041 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 14:02:39.984602  642041 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 14:02:39.984607  642041 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
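Mirroring the example annotations spelled out in the comments above, a pod opting into the hypothetical "workload-type" workload might look like the sketch below. The per-container key follows the example string above and its exact shape may vary between CRI-O versions, so treat it as an assumption:

    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo            # placeholder
      annotations:
        # Activation annotation: key only, the value is ignored.
        io.crio/workload: ""
        # Per-container override for a container named "app", following the example above.
        io.crio.workload-type/app: '{"cpushares": "512"}'
    spec:
      containers:
      - name: app
        image: busybox               # placeholder
        command: ["sleep", "3600"]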
	I0520 14:02:39.984615  642041 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 14:02:39.984621  642041 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 14:02:39.984627  642041 command_runner.go:130] > # Default value is set to true
	I0520 14:02:39.984632  642041 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 14:02:39.984637  642041 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 14:02:39.984641  642041 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 14:02:39.984645  642041 command_runner.go:130] > # Default value is set to 'false'
	I0520 14:02:39.984649  642041 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 14:02:39.984654  642041 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 14:02:39.984657  642041 command_runner.go:130] > #
	I0520 14:02:39.984663  642041 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 14:02:39.984668  642041 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 14:02:39.984674  642041 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 14:02:39.984680  642041 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 14:02:39.984685  642041 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 14:02:39.984688  642041 command_runner.go:130] > [crio.image]
	I0520 14:02:39.984693  642041 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 14:02:39.984697  642041 command_runner.go:130] > # default_transport = "docker://"
	I0520 14:02:39.984703  642041 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 14:02:39.984709  642041 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 14:02:39.984712  642041 command_runner.go:130] > # global_auth_file = ""
	I0520 14:02:39.984717  642041 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 14:02:39.984721  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.984725  642041 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 14:02:39.984731  642041 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 14:02:39.984736  642041 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 14:02:39.984740  642041 command_runner.go:130] > # This option supports live configuration reload.
	I0520 14:02:39.984744  642041 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 14:02:39.984749  642041 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 14:02:39.984754  642041 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 14:02:39.984759  642041 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 14:02:39.984764  642041 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 14:02:39.984768  642041 command_runner.go:130] > # pause_command = "/pause"
	I0520 14:02:39.984773  642041 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 14:02:39.984778  642041 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 14:02:39.984784  642041 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 14:02:39.984790  642041 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 14:02:39.984795  642041 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 14:02:39.984800  642041 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 14:02:39.984804  642041 command_runner.go:130] > # pinned_images = [
	I0520 14:02:39.984812  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984817  642041 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 14:02:39.984823  642041 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 14:02:39.984829  642041 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 14:02:39.984834  642041 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 14:02:39.984839  642041 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 14:02:39.984842  642041 command_runner.go:130] > # signature_policy = ""
	I0520 14:02:39.984847  642041 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 14:02:39.984853  642041 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 14:02:39.984859  642041 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 14:02:39.984865  642041 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 14:02:39.984870  642041 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 14:02:39.984877  642041 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 14:02:39.984883  642041 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 14:02:39.984892  642041 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 14:02:39.984895  642041 command_runner.go:130] > # changing them here.
	I0520 14:02:39.984902  642041 command_runner.go:130] > # insecure_registries = [
	I0520 14:02:39.984905  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984911  642041 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 14:02:39.984917  642041 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 14:02:39.984921  642041 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 14:02:39.984925  642041 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 14:02:39.984931  642041 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 14:02:39.984937  642041 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0520 14:02:39.984940  642041 command_runner.go:130] > # CNI plugins.
	I0520 14:02:39.984944  642041 command_runner.go:130] > [crio.network]
	I0520 14:02:39.984949  642041 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 14:02:39.984959  642041 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 14:02:39.984963  642041 command_runner.go:130] > # cni_default_network = ""
	I0520 14:02:39.984968  642041 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 14:02:39.984975  642041 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 14:02:39.984980  642041 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 14:02:39.984986  642041 command_runner.go:130] > # plugin_dirs = [
	I0520 14:02:39.984989  642041 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 14:02:39.984992  642041 command_runner.go:130] > # ]
	I0520 14:02:39.984998  642041 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 14:02:39.985003  642041 command_runner.go:130] > [crio.metrics]
	I0520 14:02:39.985008  642041 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 14:02:39.985011  642041 command_runner.go:130] > enable_metrics = true
	I0520 14:02:39.985018  642041 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 14:02:39.985026  642041 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 14:02:39.985034  642041 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0520 14:02:39.985040  642041 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 14:02:39.985048  642041 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 14:02:39.985053  642041 command_runner.go:130] > # metrics_collectors = [
	I0520 14:02:39.985059  642041 command_runner.go:130] > # 	"operations",
	I0520 14:02:39.985063  642041 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 14:02:39.985067  642041 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 14:02:39.985071  642041 command_runner.go:130] > # 	"operations_errors",
	I0520 14:02:39.985077  642041 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 14:02:39.985081  642041 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 14:02:39.985085  642041 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 14:02:39.985092  642041 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 14:02:39.985096  642041 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 14:02:39.985101  642041 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 14:02:39.985106  642041 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 14:02:39.985113  642041 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 14:02:39.985117  642041 command_runner.go:130] > # 	"containers_oom_total",
	I0520 14:02:39.985123  642041 command_runner.go:130] > # 	"containers_oom",
	I0520 14:02:39.985126  642041 command_runner.go:130] > # 	"processes_defunct",
	I0520 14:02:39.985132  642041 command_runner.go:130] > # 	"operations_total",
	I0520 14:02:39.985137  642041 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 14:02:39.985141  642041 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 14:02:39.985145  642041 command_runner.go:130] > # 	"operations_errors_total",
	I0520 14:02:39.985150  642041 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 14:02:39.985155  642041 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 14:02:39.985162  642041 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 14:02:39.985166  642041 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 14:02:39.985169  642041 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 14:02:39.985176  642041 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 14:02:39.985180  642041 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 14:02:39.985187  642041 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 14:02:39.985190  642041 command_runner.go:130] > # ]
	I0520 14:02:39.985197  642041 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 14:02:39.985201  642041 command_runner.go:130] > # metrics_port = 9090
	I0520 14:02:39.985205  642041 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 14:02:39.985209  642041 command_runner.go:130] > # metrics_socket = ""
	I0520 14:02:39.985216  642041 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 14:02:39.985228  642041 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 14:02:39.985238  642041 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 14:02:39.985262  642041 command_runner.go:130] > # certificate on any modification event.
	I0520 14:02:39.985272  642041 command_runner.go:130] > # metrics_cert = ""
	I0520 14:02:39.985281  642041 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 14:02:39.985292  642041 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 14:02:39.985301  642041 command_runner.go:130] > # metrics_key = ""
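Since metrics are enabled in this config (enable_metrics = true) and the commented default port is 9090, a Prometheus scrape job for CRI-O could be sketched as below; the job name and target address are placeholders, not values taken from this run:

    # prometheus.yml fragment
    scrape_configs:
      - job_name: crio               # placeholder job name
        static_configs:
          - targets: ["127.0.0.1:9090"]   # placeholder target; use a reachable node address and the configured metrics_port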
	I0520 14:02:39.985312  642041 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 14:02:39.985321  642041 command_runner.go:130] > [crio.tracing]
	I0520 14:02:39.985326  642041 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 14:02:39.985330  642041 command_runner.go:130] > # enable_tracing = false
	I0520 14:02:39.985336  642041 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 14:02:39.985342  642041 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 14:02:39.985349  642041 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 14:02:39.985354  642041 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 14:02:39.985358  642041 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 14:02:39.985363  642041 command_runner.go:130] > [crio.nri]
	I0520 14:02:39.985367  642041 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 14:02:39.985373  642041 command_runner.go:130] > # enable_nri = false
	I0520 14:02:39.985377  642041 command_runner.go:130] > # NRI socket to listen on.
	I0520 14:02:39.985383  642041 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 14:02:39.985388  642041 command_runner.go:130] > # NRI plugin directory to use.
	I0520 14:02:39.985395  642041 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 14:02:39.985400  642041 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 14:02:39.985407  642041 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 14:02:39.985412  642041 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 14:02:39.985419  642041 command_runner.go:130] > # nri_disable_connections = false
	I0520 14:02:39.985424  642041 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 14:02:39.985429  642041 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 14:02:39.985435  642041 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 14:02:39.985442  642041 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 14:02:39.985447  642041 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 14:02:39.985451  642041 command_runner.go:130] > [crio.stats]
	I0520 14:02:39.985458  642041 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 14:02:39.985463  642041 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 14:02:39.985468  642041 command_runner.go:130] > # stats_collection_period = 0
	I0520 14:02:39.985629  642041 cni.go:84] Creating CNI manager for ""
	I0520 14:02:39.985643  642041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 14:02:39.985662  642041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:02:39.985686  642041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-114485 NodeName:multinode-114485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 14:02:39.985819  642041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-114485"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 14:02:39.985883  642041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 14:02:39.995744  642041 command_runner.go:130] > kubeadm
	I0520 14:02:39.995772  642041 command_runner.go:130] > kubectl
	I0520 14:02:39.995778  642041 command_runner.go:130] > kubelet
	I0520 14:02:39.995821  642041 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:02:39.995887  642041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:02:40.005600  642041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0520 14:02:40.023112  642041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:02:40.039646  642041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0520 14:02:40.055898  642041 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I0520 14:02:40.059544  642041 command_runner.go:130] > 192.168.39.141	control-plane.minikube.internal
	I0520 14:02:40.059615  642041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:02:40.205637  642041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:02:40.222709  642041 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485 for IP: 192.168.39.141
	I0520 14:02:40.222738  642041 certs.go:194] generating shared ca certs ...
	I0520 14:02:40.222760  642041 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:02:40.222947  642041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:02:40.223019  642041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:02:40.223051  642041 certs.go:256] generating profile certs ...
	I0520 14:02:40.223167  642041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/client.key
	I0520 14:02:40.223242  642041 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key.1cd91c1b
	I0520 14:02:40.223303  642041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key
	I0520 14:02:40.223318  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 14:02:40.223333  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 14:02:40.223350  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 14:02:40.223366  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 14:02:40.223383  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 14:02:40.223409  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 14:02:40.223425  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 14:02:40.223441  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 14:02:40.223505  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:02:40.223541  642041 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:02:40.223556  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:02:40.223585  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:02:40.223616  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:02:40.223649  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:02:40.223698  642041 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:02:40.223735  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.223753  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.223770  642041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem -> /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.224643  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:02:40.250630  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:02:40.273706  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:02:40.296666  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:02:40.319223  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 14:02:40.342137  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 14:02:40.364834  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:02:40.387371  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:02:40.409308  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:02:40.431654  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:02:40.453720  642041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:02:40.475773  642041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:02:40.491827  642041 ssh_runner.go:195] Run: openssl version
	I0520 14:02:40.497259  642041 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 14:02:40.497347  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:02:40.507160  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511226  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511426  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.511485  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:02:40.516633  642041 command_runner.go:130] > 3ec20f2e
	I0520 14:02:40.516749  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 14:02:40.525859  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:02:40.536264  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540427  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540470  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.540532  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:02:40.545744  642041 command_runner.go:130] > b5213941
	I0520 14:02:40.545835  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:02:40.555089  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:02:40.565731  642041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.569779  642041 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.569965  642041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.570020  642041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:02:40.575155  642041 command_runner.go:130] > 51391683
	I0520 14:02:40.575278  642041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
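The three blocks above show how minikube installs its CA bundles into the system trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created in /etc/ssl/certs, which is the lookup scheme OpenSSL uses. A minimal shell sketch of the same steps (file names reused from this log; the loop itself is illustrative, not what minikube executes):

	for pem in /usr/share/ca-certificates/6098672.pem \
	           /usr/share/ca-certificates/minikubeCA.pem \
	           /usr/share/ca-certificates/609867.pem; do
	  # subject hash of the certificate, e.g. 3ec20f2e / b5213941 / 51391683 above
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done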
	I0520 14:02:40.584461  642041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:02:40.588716  642041 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:02:40.588747  642041 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 14:02:40.588756  642041 command_runner.go:130] > Device: 253,1	Inode: 5245462     Links: 1
	I0520 14:02:40.588765  642041 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 14:02:40.588773  642041 command_runner.go:130] > Access: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588787  642041 command_runner.go:130] > Modify: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588799  642041 command_runner.go:130] > Change: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588807  642041 command_runner.go:130] >  Birth: 2024-05-20 13:56:30.849266718 +0000
	I0520 14:02:40.588869  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 14:02:40.594128  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.594308  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 14:02:40.599424  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.599573  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 14:02:40.604801  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.604871  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 14:02:40.610006  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.610177  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 14:02:40.615543  642041 command_runner.go:130] > Certificate will not expire
	I0520 14:02:40.615613  642041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 14:02:40.620802  642041 command_runner.go:130] > Certificate will not expire
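Each "Certificate will not expire" line above is the stdout of `openssl x509 -checkend 86400`, which prints that message and exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise. A hedged one-liner for checking a single certificate by hand (certificate path taken from this log):

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" \
	  || echo "expires within 24h (or already expired)"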
	I0520 14:02:40.620876  642041 kubeadm.go:391] StartCluster: {Name:multinode-114485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-114485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
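StartCluster logs the full in-memory ClusterConfig for the profile. As a hedged sketch, the same configuration is persisted as JSON under the profile directory if the default layout is used (the MINIKUBE_HOME path below is inferred from the certificate paths in this log; jq is only an optional convenience and an assumption here):

	jq '.Nodes' /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/multinode-114485/config.json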
	I0520 14:02:40.620994  642041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:02:40.621074  642041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:02:40.661215  642041 command_runner.go:130] > 1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7
	I0520 14:02:40.661240  642041 command_runner.go:130] > a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd
	I0520 14:02:40.661263  642041 command_runner.go:130] > 40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834
	I0520 14:02:40.661269  642041 command_runner.go:130] > 402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a
	I0520 14:02:40.661274  642041 command_runner.go:130] > b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be
	I0520 14:02:40.661279  642041 command_runner.go:130] > 68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2
	I0520 14:02:40.661284  642041 command_runner.go:130] > 724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea
	I0520 14:02:40.661314  642041 command_runner.go:130] > 08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c
	I0520 14:02:40.661338  642041 cri.go:89] found id: "1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7"
	I0520 14:02:40.661346  642041 cri.go:89] found id: "a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd"
	I0520 14:02:40.661349  642041 cri.go:89] found id: "40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834"
	I0520 14:02:40.661355  642041 cri.go:89] found id: "402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a"
	I0520 14:02:40.661358  642041 cri.go:89] found id: "b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be"
	I0520 14:02:40.661361  642041 cri.go:89] found id: "68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2"
	I0520 14:02:40.661366  642041 cri.go:89] found id: "724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea"
	I0520 14:02:40.661369  642041 cri.go:89] found id: "08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c"
	I0520 14:02:40.661371  642041 cri.go:89] found id: ""
	I0520 14:02:40.661414  642041 ssh_runner.go:195] Run: sudo runc list -f json
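Before deciding how to restart the control plane, minikube enumerates the existing kube-system containers through CRI-O (the crictl invocation is quoted verbatim a few lines above) and then asks runc for the low-level container list. A hedged sketch for reproducing the same view on the node; the second crictl command simply drops --quiet so container names and states are shown instead of bare IDs:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json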
	
	
	==> CRI-O <==
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.697289929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e0d8236-a1f0-4218-a0bc-497013522603 name=/runtime.v1.RuntimeService/Version
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.698554458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7ec6e3a-7ef2-448c-9bb5-3961c161d84a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.699017582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213989698992263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7ec6e3a-7ef2-448c-9bb5-3961c161d84a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.699604167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c125c98-49fd-4218-9d34-63c7ce3d1d4a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.699694677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c125c98-49fd-4218-9d34-63c7ce3d1d4a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.700110708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c125c98-49fd-4218-9d34-63c7ce3d1d4a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.738279359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=382c62f6-2240-4505-b15c-709d1dcfa5d1 name=/runtime.v1.RuntimeService/Version
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.738368916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=382c62f6-2240-4505-b15c-709d1dcfa5d1 name=/runtime.v1.RuntimeService/Version
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.740033262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80005c9a-e480-4859-85e9-72cde8bb7731 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.740433111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213989740409583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80005c9a-e480-4859-85e9-72cde8bb7731 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.741022625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0392c4d9-33fd-48fe-850b-5344efd0fe00 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.741087490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0392c4d9-33fd-48fe-850b-5344efd0fe00 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.741460149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0392c4d9-33fd-48fe-850b-5344efd0fe00 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.779720086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ca228a2-6dd3-42aa-9142-69969cce375c name=/runtime.v1.RuntimeService/Version
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.779849389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ca228a2-6dd3-42aa-9142-69969cce375c name=/runtime.v1.RuntimeService/Version
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.781144162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5847c945-dd33-4f3f-b11b-eea798a301e2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.781686957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716213989781663557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5847c945-dd33-4f3f-b11b-eea798a301e2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.782216582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7757cc10-d898-4f44-bcbc-883d1d4e8383 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.782266991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7757cc10-d898-4f44-bcbc-883d1d4e8383 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.782613550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7757cc10-d898-4f44-bcbc-883d1d4e8383 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.797274938Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3559a25d-b05e-40c6-b237-64590aef5e00 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.797662294Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-w8gjh,Uid:a510c35e-ae74-4076-a8ae-12913bb167bc,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213800222465903,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T14:02:46.125721450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2vnnq,Uid:8e815096-de18-40b2-af12-e6cbc2faf393,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1716213766543354665,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T14:02:46.125731419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&PodSandboxMetadata{Name:kindnet-cthl4,Uid:bd51aead-83ce-49c7-a860-e88ae9e25ff1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213766463419250,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-05-20T14:02:46.125732685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&PodSandboxMetadata{Name:kube-proxy-c5jv4,Uid:0102751c-4388-4e4d-80ed-3115f4ae124d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213766459197691,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T14:02:46.125728831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e16b9968-0b37-4750-bf40-91d6bcf8dd47,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1716213766455189260,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T14:02:46.125736018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&PodSandboxMetadata{Name:etcd-multinode-114485,Uid:bed4fe5c25fd655cbfa4151a2ba98b62,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213762616107913,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.141:2379,kubernetes.io/config.hash: bed4fe5c25fd655cbfa4151a2ba98b62,kubernetes.io/config.seen: 2024-05-20T14:02:42.129022976Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metada
ta:&PodSandboxMetadata{Name:kube-controller-manager-multinode-114485,Uid:9b51848b2826d5f2f54daa8c7e926d44,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213762612427585,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b51848b2826d5f2f54daa8c7e926d44,kubernetes.io/config.seen: 2024-05-20T14:02:42.129028297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-114485,Uid:97322539ef6d03f7d0713524ed65bd38,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213762611479020,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97322539ef6d03f7d0713524ed65bd38,kubernetes.io/config.seen: 2024-05-20T14:02:42.129029200Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-114485,Uid:0ad67bf1fe7f0e8a7bf69d5729335321,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716213762592433793,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.141:8443,kuberne
tes.io/config.hash: 0ad67bf1fe7f0e8a7bf69d5729335321,kubernetes.io/config.seen: 2024-05-20T14:02:42.129027137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-w8gjh,Uid:a510c35e-ae74-4076-a8ae-12913bb167bc,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213460278074951,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:57:39.969238675Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e16b9968-0b37-4750-bf40-91d6bcf8dd47,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1716213416258834157,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T13:56:55.953307501Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2vnnq,Uid:8e815096-de18-40b2-af12-e6cbc2faf393,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213416253921368,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:56:55.941268885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&PodSandboxMetadata{Name:kindnet-cthl4,Uid:bd51aead-83ce-49c7-a860-e88ae9e25ff1,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213413494250433,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:56:53.139964886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&PodSandboxMetadata{Name:kube-proxy-c5jv4,Uid:0102751c-4388-4e4d-80ed-3115f4ae124d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213413482735028,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:56:53.140156677Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-114485,Uid:0ad67bf1fe7f0e8a7bf69d5729335321,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213394042832169,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.141:8443,kubernetes.io/config.hash: 0ad67bf1fe7f0e8a7bf69d5729335321,kubernetes.io/config.seen: 2024-05-20T13:56:33.580447209Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a27d81d93d24a
cfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-114485,Uid:9b51848b2826d5f2f54daa8c7e926d44,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213394040056616,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b51848b2826d5f2f54daa8c7e926d44,kubernetes.io/config.seen: 2024-05-20T13:56:33.580449312Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-114485,Uid:97322539ef6d03f7d0713524ed65bd38,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213394035962754,Labels:map[string]string
{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97322539ef6d03f7d0713524ed65bd38,kubernetes.io/config.seen: 2024-05-20T13:56:33.580450686Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&PodSandboxMetadata{Name:etcd-multinode-114485,Uid:bed4fe5c25fd655cbfa4151a2ba98b62,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716213394033510595,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.141:2379,kubernetes.io/config.hash: bed4fe5c25fd655cbfa4151a2ba98b62,kubernetes.io/config.seen: 2024-05-20T13:56:33.580441222Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3559a25d-b05e-40c6-b237-64590aef5e00 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.798827710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38192c7a-fb00-46bc-9747-90609f39b826 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.798898379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38192c7a-fb00-46bc-9747-90609f39b826 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:06:29 multinode-114485 crio[2846]: time="2024-05-20 14:06:29.799356335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b5aa659743d06e72e553df5fb166941a4a09164354a9aba9ebd6df158b0a2c0,PodSandboxId:f77cab11bbedc326759c97360323edea68d34cbe5ac03da2ec3c5c1f43feafc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716213800395288528,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19,PodSandboxId:e11526823350804b06706aa9ef92e5212c4b01bfd21f0fee70b1424c6057fc0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716213766873056977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7,PodSandboxId:2c7c485c8e9047928b532493574043ec610fbc5b67a00c0f500d5b8a7b9195aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716213766819036127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f,PodSandboxId:e877889b86a8f9ff3e23838b21889a311bf9e8fe40aa6b7d8d325fae4449ea40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716213766724036277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed-3115f4ae124d,},Annotations:map[string]
string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8aa664fbcdbdad4bcd080fd54d7101b141b67f9fb50c0a61a0bd40d1cb24878,PodSandboxId:52ff335575c31b24def77852dcaed5194fa475f155e6d03b761c5604fe76ca0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716213766734459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.ku
bernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d,PodSandboxId:e8eb7950cef0cb0f3b440962fa55845ffe4199d9e308f13b03b77d8b892baab0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716213762837851891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1,PodSandboxId:1519ac6f1d6decbe68b1ba789182ffdcf51589bb058b0f4ad596b894c06ce5b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716213762839873064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kub
ernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519,PodSandboxId:995ff54780bf97e426d8e6a022d196bdf501017a8d001fbce488f37e4d44fe88,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716213762787837814,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7423fb5c,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd,PodSandboxId:6be4e58e0ac0ab92c7c2aa004e6cb1a62a6ba003171818845b731b483ff3db95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716213762748483252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a65b004f5928e7c57c08a060d3b21ebfa7b51556b043dfb32d65be7c9f2ad0b,PodSandboxId:4b8717ffa777a554895361c3d98768f45d847a5f60a70427644ebadb7ad8e183,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716213462731382845,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8gjh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a510c35e-ae74-4076-a8ae-12913bb167bc,},Annotations:map[string]string{io.kubernetes.container.hash: 106b706f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7,PodSandboxId:0a83d9634f503160df44dbfec482c89bc010bc1fba09e4f243dbb0734139ff65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716213416445134004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2vnnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e815096-de18-40b2-af12-e6cbc2faf393,},Annotations:map[string]string{io.kubernetes.container.hash: 1473217a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d4b37910552f514029a560da97b4d30702e290efd8222ae7d93f001e8fc6cd,PodSandboxId:b5fc4aba67a0c2a17c5fb357f0dea7c3d48c2256f81ef3528bc056471785bdb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716213416357973958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e16b9968-0b37-4750-bf40-91d6bcf8dd47,},Annotations:map[string]string{io.kubernetes.container.hash: d1a502da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834,PodSandboxId:7cab01574fea9f8306f5099eca8737a6b7511280f1111c11f4e58c7159f6c7d0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716213414025508308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cthl4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bd51aead-83ce-49c7-a860-e88ae9e25ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 5212307c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a,PodSandboxId:a10d72941c5420edc8219ac630212eaf4276d7815ff265ce0015268ace796849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716213413772333796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5jv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0102751c-4388-4e4d-80ed
-3115f4ae124d,},Annotations:map[string]string{io.kubernetes.container.hash: 10056e2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be,PodSandboxId:399f171341d410bf6c0e8abc16be9a017b0059f237940969ee1ebf4782d5c399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716213394300306427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97322539ef6d03f7d0713524ed65bd38,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2,PodSandboxId:b63f4ee24680ba0e7d053ef95e1e566e12af314391f351f41255197696918189,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716213394270367335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed4fe5c25fd655cbfa4151a2ba98b62,},Annotations:map[string]string{io.kubernetes.
container.hash: 7423fb5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea,PodSandboxId:67e21022c7a3415dba1d15b31d0c206eda7eedf04c4524d232a063fa381e188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716213394268329983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad67bf1fe7f0e8a7bf69d5729335321,},Annotations:map[string]string{io.kubernetes.container.hash:
c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c,PodSandboxId:7a27d81d93d24acfcb1ef9947ee060f6f93aee219bc5565ffc273c247a279099,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716213394199325606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-114485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b51848b2826d5f2f54daa8c7e926d44,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38192c7a-fb00-46bc-9747-90609f39b826 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b5aa659743d0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   f77cab11bbedc       busybox-fc5497c4f-w8gjh
	df75c1bbda42f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   e115268233508       kindnet-cthl4
	10846343b9337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   2c7c485c8e904       coredns-7db6d8ff4d-2vnnq
	a8aa664fbcdbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   52ff335575c31       storage-provisioner
	222a378af63cf       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   e877889b86a8f       kube-proxy-c5jv4
	b47a632823b64       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   1519ac6f1d6de       kube-controller-manager-multinode-114485
	552538ae34f5b       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   e8eb7950cef0c       kube-scheduler-multinode-114485
	3ba8ed72509ec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   995ff54780bf9       etcd-multinode-114485
	a0950bfbde431       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   6be4e58e0ac0a       kube-apiserver-multinode-114485
	7a65b004f5928       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   4b8717ffa777a       busybox-fc5497c4f-w8gjh
	1ff5148b0c6ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   0a83d9634f503       coredns-7db6d8ff4d-2vnnq
	a6d4b37910552       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   b5fc4aba67a0c       storage-provisioner
	40573632694f5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   7cab01574fea9       kindnet-cthl4
	402715f11c169       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   a10d72941c542       kube-proxy-c5jv4
	b880898a00654       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   399f171341d41       kube-scheduler-multinode-114485
	68b22a0039a12       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   b63f4ee24680b       etcd-multinode-114485
	724a0f328d829       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   67e21022c7a34       kube-apiserver-multinode-114485
	08459b873db5f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   7a27d81d93d24       kube-controller-manager-multinode-114485
	
	
	==> coredns [10846343b933740c1503c452acbbf4b4d1d8fc8079f65473c0596dcba48576f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53624 - 32218 "HINFO IN 304773559447969083.7830795655233602673. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00893932s
	
	
	==> coredns [1ff5148b0c6ef582137978525a24e4eb722c2f50fe7c0bce9bc4577c82911ed7] <==
	[INFO] 10.244.0.3:39109 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770291s
	[INFO] 10.244.0.3:58040 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104751s
	[INFO] 10.244.0.3:40242 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142644s
	[INFO] 10.244.0.3:59131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001361565s
	[INFO] 10.244.0.3:40719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094422s
	[INFO] 10.244.0.3:56913 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112225s
	[INFO] 10.244.0.3:49638 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001034326s
	[INFO] 10.244.1.2:50916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293699s
	[INFO] 10.244.1.2:37156 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203471s
	[INFO] 10.244.1.2:42754 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171979s
	[INFO] 10.244.1.2:35018 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083128s
	[INFO] 10.244.0.3:43392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194206s
	[INFO] 10.244.0.3:42750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075694s
	[INFO] 10.244.0.3:36250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200931s
	[INFO] 10.244.0.3:53362 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154831s
	[INFO] 10.244.1.2:55106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106323s
	[INFO] 10.244.1.2:45726 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357128s
	[INFO] 10.244.1.2:34646 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114132s
	[INFO] 10.244.1.2:38586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200709s
	[INFO] 10.244.0.3:45806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083628s
	[INFO] 10.244.0.3:43445 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000038864s
	[INFO] 10.244.0.3:50558 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00003753s
	[INFO] 10.244.0.3:47130 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000035206s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-114485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-114485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=multinode-114485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_56_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-114485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:02:45 +0000   Mon, 20 May 2024 13:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    multinode-114485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7eaa9386e4541a9b98eb4fedef56182
	  System UUID:                f7eaa938-6e45-41a9-b98e-b4fedef56182
	  Boot ID:                    da877314-8b45-4837-8c5b-bf338c249bde
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8gjh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 coredns-7db6d8ff4d-2vnnq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m37s
	  kube-system                 etcd-multinode-114485                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m51s
	  kube-system                 kindnet-cthl4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m37s
	  kube-system                 kube-apiserver-multinode-114485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-controller-manager-multinode-114485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-proxy-c5jv4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-scheduler-multinode-114485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m51s                  kubelet          Node multinode-114485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m51s                  kubelet          Node multinode-114485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m51s                  kubelet          Node multinode-114485 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m51s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m38s                  node-controller  Node multinode-114485 event: Registered Node multinode-114485 in Controller
	  Normal  NodeReady                9m35s                  kubelet          Node multinode-114485 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-114485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-114485 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-114485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-114485 event: Registered Node multinode-114485 in Controller
	
	
	Name:               multinode-114485-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-114485-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=multinode-114485
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T14_03_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:03:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-114485-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:04:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 14:03:55 +0000   Mon, 20 May 2024 14:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-114485-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a746118cadd34140a015ce06209237c3
	  System UUID:                a746118c-add3-4140-a015-ce06209237c3
	  Boot ID:                    fc6c1200-4ab5-49fe-a2de-7f6362203f15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bcfmm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-xcxtk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m3s
	  kube-system                 kube-proxy-6w2qv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  Starting                 8m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m3s (x2 over 9m4s)  kubelet          Node multinode-114485-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m3s (x2 over 9m4s)  kubelet          Node multinode-114485-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m3s (x2 over 9m4s)  kubelet          Node multinode-114485-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m53s                kubelet          Node multinode-114485-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)  kubelet          Node multinode-114485-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)  kubelet          Node multinode-114485-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)  kubelet          Node multinode-114485-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m56s                kubelet          Node multinode-114485-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                 node-controller  Node multinode-114485-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +6.329412] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059863] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061281] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174792] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.138775] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.266440] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.964781] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.090345] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.061845] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.978278] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.074685] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.824866] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.360732] systemd-fstab-generator[1471]: Ignoring "noauto" option for root device
	[May20 13:57] kauditd_printk_skb: 84 callbacks suppressed
	[May20 14:02] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.137355] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.178565] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.140879] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.268383] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +1.969370] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +1.830679] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.734564] kauditd_printk_skb: 144 callbacks suppressed
	[ +16.143280] kauditd_printk_skb: 72 callbacks suppressed
	[May20 14:03] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[ +20.025623] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3ba8ed72509ec2cf2738715a3ae02d183f652a0fdf046452899b1ca36c8d1519] <==
	{"level":"info","ts":"2024-05-20T14:02:43.207245Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:02:43.20732Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:02:43.208354Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:02:43.211541Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:02:43.212574Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:02:43.211201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb switched to configuration voters=(2565046577238143947)"}
	{"level":"info","ts":"2024-05-20T14:02:43.213509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","added-peer-id":"2398e045949c73cb","added-peer-peer-urls":["https://192.168.39.141:2380"]}
	{"level":"info","ts":"2024-05-20T14:02:43.211332Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:02:43.216093Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:02:43.216326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:02:43.216607Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:02:44.259142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2024-05-20T14:02:44.259235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.259279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-05-20T14:02:44.26659Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:multinode-114485 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:02:44.266857Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:02:44.267899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:02:44.26889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:02:44.268923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T14:02:44.268924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T14:02:44.272596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.141:2379"}
	
	
	==> etcd [68b22a0039a12e8d4779947ae72bff4b25bce62b4a69f504b852f52ac295d5a2] <==
	{"level":"info","ts":"2024-05-20T13:57:27.222093Z","caller":"traceutil/trace.go:171","msg":"trace[486503619] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"263.948158ms","start":"2024-05-20T13:57:26.958128Z","end":"2024-05-20T13:57:27.222076Z","steps":["trace[486503619] 'process raft request'  (duration: 66.845703ms)","trace[486503619] 'compare'  (duration: 195.957062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:57:27.222242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.615888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:57:27.222288Z","caller":"traceutil/trace.go:171","msg":"trace[46184745] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:453; }","duration":"169.669219ms","start":"2024-05-20T13:57:27.052607Z","end":"2024-05-20T13:57:27.222276Z","steps":["trace[46184745] 'agreement among raft nodes before linearized reading'  (duration: 169.582858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:58:11.290597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.104934ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920610286423031 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-bcfrb\" mod_revision:576 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-bcfrb\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-bcfrb\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T13:58:11.290966Z","caller":"traceutil/trace.go:171","msg":"trace[1657015360] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:612; }","duration":"236.862655ms","start":"2024-05-20T13:58:11.054073Z","end":"2024-05-20T13:58:11.290936Z","steps":["trace[1657015360] 'read index received'  (duration: 97.002572ms)","trace[1657015360] 'applied index is now lower than readState.Index'  (duration: 139.85839ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:58:11.291378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.203364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:58:11.291455Z","caller":"traceutil/trace.go:171","msg":"trace[1003733973] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:578; }","duration":"237.394051ms","start":"2024-05-20T13:58:11.05405Z","end":"2024-05-20T13:58:11.291444Z","steps":["trace[1003733973] 'agreement among raft nodes before linearized reading'  (duration: 237.086596ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:58:11.291553Z","caller":"traceutil/trace.go:171","msg":"trace[857176718] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"252.499548ms","start":"2024-05-20T13:58:11.039033Z","end":"2024-05-20T13:58:11.291532Z","steps":["trace[857176718] 'process raft request'  (duration: 112.08375ms)","trace[857176718] 'compare'  (duration: 138.96424ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:58:18.564587Z","caller":"traceutil/trace.go:171","msg":"trace[197123083] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"121.737931ms","start":"2024-05-20T13:58:18.442815Z","end":"2024-05-20T13:58:18.564553Z","steps":["trace[197123083] 'process raft request'  (duration: 121.477447ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:58:19.016428Z","caller":"traceutil/trace.go:171","msg":"trace[998706925] linearizableReadLoop","detail":"{readStateIndex:659; appliedIndex:658; }","duration":"105.368948ms","start":"2024-05-20T13:58:18.911042Z","end":"2024-05-20T13:58:19.016411Z","steps":["trace[998706925] 'read index received'  (duration: 24.60621ms)","trace[998706925] 'applied index is now lower than readState.Index'  (duration: 80.762083ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:58:19.016585Z","caller":"traceutil/trace.go:171","msg":"trace[336241855] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"145.409896ms","start":"2024-05-20T13:58:18.871168Z","end":"2024-05-20T13:58:19.016578Z","steps":["trace[336241855] 'process raft request'  (duration: 64.522791ms)","trace[336241855] 'compare'  (duration: 80.641083ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T13:58:19.016449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.741691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.141\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-20T13:58:19.016671Z","caller":"traceutil/trace.go:171","msg":"trace[205568241] range","detail":"{range_begin:/registry/masterleases/192.168.39.141; range_end:; response_count:1; response_revision:619; }","duration":"255.006766ms","start":"2024-05-20T13:58:18.761652Z","end":"2024-05-20T13:58:19.016659Z","steps":["trace[205568241] 'range keys from in-memory index tree'  (duration: 254.551981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:58:19.017298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.244166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T13:58:19.017995Z","caller":"traceutil/trace.go:171","msg":"trace[676301740] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:620; }","duration":"106.94281ms","start":"2024-05-20T13:58:18.911018Z","end":"2024-05-20T13:58:19.017961Z","steps":["trace[676301740] 'agreement among raft nodes before linearized reading'  (duration: 106.252638ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T14:01:06.15863Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T14:01:06.158743Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-114485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	{"level":"warn","ts":"2024-05-20T14:01:06.158887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.158989Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.212994Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T14:01:06.213083Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T14:01:06.213155Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2398e045949c73cb","current-leader-member-id":"2398e045949c73cb"}
	{"level":"info","ts":"2024-05-20T14:01:06.21935Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:01:06.219461Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-05-20T14:01:06.219482Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-114485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	
	
	==> kernel <==
	 14:06:30 up 10 min,  0 users,  load average: 0.13, 0.29, 0.21
	Linux multinode-114485 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [40573632694f53397011e5740b786034914eacd4130a49af46d4dd3552733834] <==
	I0520 14:00:25.817562       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:35.828869       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:35.828962       1 main.go:227] handling current node
	I0520 14:00:35.828987       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:35.829005       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:35.829139       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:35.829177       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:45.836237       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:45.838748       1 main.go:227] handling current node
	I0520 14:00:45.838871       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:45.838909       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:45.839100       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:45.839151       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:00:55.844461       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:00:55.844506       1 main.go:227] handling current node
	I0520 14:00:55.844535       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:00:55.844543       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:00:55.844713       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:00:55.844738       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	I0520 14:01:05.856089       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:01:05.856120       1 main.go:227] handling current node
	I0520 14:01:05.856130       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:01:05.856136       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:01:05.856233       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I0520 14:01:05.856251       1 main.go:250] Node multinode-114485-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [df75c1bbda42f260827efa8676d14fa9eddbe9b2816ce1febe4c9dd46115eb19] <==
	I0520 14:05:27.798879       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:05:37.807432       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:05:37.807656       1 main.go:227] handling current node
	I0520 14:05:37.807701       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:05:37.807756       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:05:47.811575       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:05:47.811617       1 main.go:227] handling current node
	I0520 14:05:47.811627       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:05:47.811633       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:05:57.816610       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:05:57.816649       1 main.go:227] handling current node
	I0520 14:05:57.816659       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:05:57.816665       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:06:07.826593       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:06:07.826707       1 main.go:227] handling current node
	I0520 14:06:07.826736       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:06:07.826756       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:06:17.840259       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:06:17.840385       1 main.go:227] handling current node
	I0520 14:06:17.840409       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:06:17.840428       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	I0520 14:06:27.851582       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0520 14:06:27.852237       1 main.go:227] handling current node
	I0520 14:06:27.852291       1 main.go:223] Handling node with IPs: map[192.168.39.55:{}]
	I0520 14:06:27.852321       1 main.go:250] Node multinode-114485-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [724a0f328d82967c2d2ac56e446c78255da4665ff63ffeb7b0c53a553f1a5eea] <==
	W0520 14:01:06.162680       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.162699       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.171405       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172643       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172727       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.172961       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173141       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173228       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173278       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173336       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173361       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173409       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173456       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173499       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173526       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173580       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173632       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173678       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173729       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173815       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173863       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173909       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173340       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173149       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:01:06.173504       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0950bfbde431e07391bb72043d6a6f3e1141698eea80fe52436c452f26a03cd] <==
	I0520 14:02:45.643388       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:02:45.655644       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 14:02:45.671973       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 14:02:45.674898       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:02:45.674984       1 policy_source.go:224] refreshing policies
	I0520 14:02:45.699272       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 14:02:45.699750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 14:02:45.700259       1 aggregator.go:165] initial CRD sync complete...
	I0520 14:02:45.700304       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 14:02:45.700328       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 14:02:45.700352       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:02:45.700354       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0520 14:02:45.710174       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 14:02:45.710597       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 14:02:45.710616       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 14:02:45.710716       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:02:45.711634       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 14:02:46.505686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:02:47.711340       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 14:02:47.832215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 14:02:47.844629       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 14:02:47.957511       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:02:47.977144       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 14:02:58.787000       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 14:02:58.789334       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [08459b873db5fd2313db35ebf67b4d487eac4df351de620a38af8b2d730ee07c] <==
	I0520 13:57:27.227568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m02\" does not exist"
	I0520 13:57:27.237994       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:57:27.264084       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-114485-m02"
	I0520 13:57:37.630344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:57:39.979675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.490314ms"
	I0520 13:57:39.994897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.140026ms"
	I0520 13:57:40.009546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.597496ms"
	I0520 13:57:40.009628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.411µs"
	I0520 13:57:43.109439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.344769ms"
	I0520 13:57:43.110975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.886µs"
	I0520 13:57:43.796414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.233683ms"
	I0520 13:57:43.797324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.637µs"
	I0520 13:58:11.344184       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 13:58:11.345452       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:11.359247       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:58:12.284289       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-114485-m03"
	I0520 13:58:21.896612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:50.149100       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:51.688969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 13:58:51.689527       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:58:51.696376       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.3.0/24"]
	I0520 13:59:00.450899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:59:42.341121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 13:59:42.404865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.745822ms"
	I0520 13:59:42.405022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.016µs"
	
	
	==> kube-controller-manager [b47a632823b64cd2a6a6a5667d9594e7369a7385699677aed10c88fcce49c6a1] <==
	I0520 14:03:24.767673       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m02" podCIDRs=["10.244.1.0/24"]
	I0520 14:03:26.654037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.392µs"
	I0520 14:03:26.702382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.215µs"
	I0520 14:03:26.712952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.386µs"
	I0520 14:03:26.718268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.942µs"
	I0520 14:03:26.726893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.896µs"
	I0520 14:03:26.731056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.892µs"
	I0520 14:03:29.049822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.759µs"
	I0520 14:03:34.151038       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:34.176738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.606µs"
	I0520 14:03:34.197898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.093µs"
	I0520 14:03:37.164106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.08319ms"
	I0520 14:03:37.164227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.17µs"
	I0520 14:03:52.430672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:53.638451       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-114485-m03\" does not exist"
	I0520 14:03:53.638615       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:03:53.651846       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-114485-m03" podCIDRs=["10.244.2.0/24"]
	I0520 14:04:02.704103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:04:08.312756       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-114485-m02"
	I0520 14:04:48.959011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.525687ms"
	I0520 14:04:48.959501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.493µs"
	I0520 14:05:18.804549       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8hz6f"
	I0520 14:05:18.834967       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8hz6f"
	I0520 14:05:18.835053       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6fkdn"
	I0520 14:05:18.867104       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6fkdn"
	
	
	==> kube-proxy [222a378af63cfd0b3dd2634d41788c29fe8273a65a7a6c03ad3324256eda986f] <==
	I0520 14:02:47.066219       1 server_linux.go:69] "Using iptables proxy"
	I0520 14:02:47.083590       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0520 14:02:47.147688       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 14:02:47.147743       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 14:02:47.147764       1 server_linux.go:165] "Using iptables Proxier"
	I0520 14:02:47.150276       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 14:02:47.150457       1 server.go:872] "Version info" version="v1.30.1"
	I0520 14:02:47.150487       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:02:47.152031       1 config.go:192] "Starting service config controller"
	I0520 14:02:47.152067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 14:02:47.152093       1 config.go:101] "Starting endpoint slice config controller"
	I0520 14:02:47.152108       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 14:02:47.152642       1 config.go:319] "Starting node config controller"
	I0520 14:02:47.152672       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 14:02:47.252739       1 shared_informer.go:320] Caches are synced for node config
	I0520 14:02:47.252766       1 shared_informer.go:320] Caches are synced for service config
	I0520 14:02:47.252844       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [402715f11c169f573b8dbc6e0a4941404dcb6550b6dcfd79e497161a808b6c3a] <==
	I0520 13:56:54.255059       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:56:54.292715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0520 13:56:54.403699       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:56:54.403748       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:56:54.403818       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:56:54.406057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:56:54.406269       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:56:54.406294       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:56:54.409602       1 config.go:192] "Starting service config controller"
	I0520 13:56:54.409632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:56:54.409651       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:56:54.409655       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:56:54.410232       1 config.go:319] "Starting node config controller"
	I0520 13:56:54.410238       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:56:54.509763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:56:54.509955       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:56:54.510727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [552538ae34f5ba25a3d81e16bb92d73f926891c616aba284f99dc8383269059d] <==
	I0520 14:02:43.763062       1 serving.go:380] Generated self-signed cert in-memory
	W0520 14:02:45.584678       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 14:02:45.584761       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:02:45.584820       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 14:02:45.584831       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 14:02:45.646589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 14:02:45.646670       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:02:45.656614       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 14:02:45.656743       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 14:02:45.656808       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:02:45.656835       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 14:02:45.757816       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b880898a006546d6c8d22ee52fe792bbba8137e189a7b8d2ed95c35965a308be] <==
	W0520 13:56:37.976870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:56:37.976999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:56:38.007470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:56:38.007860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:56:38.026048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.026175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.110873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:56:38.110975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:56:38.125253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.125341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.130266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:56:38.130349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:56:38.139982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:56:38.140085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:56:38.219663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:56:38.219932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:56:38.318455       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:56:38.318579       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:56:38.345634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:56:38.345663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0520 13:56:40.602184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:01:06.157663       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0520 14:01:06.157834       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0520 14:01:06.158106       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0520 14:01:06.174409       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.128560    3059 topology_manager.go:215] "Topology Admit Handler" podUID="bd51aead-83ce-49c7-a860-e88ae9e25ff1" podNamespace="kube-system" podName="kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.128916    3059 topology_manager.go:215] "Topology Admit Handler" podUID="e16b9968-0b37-4750-bf40-91d6bcf8dd47" podNamespace="kube-system" podName="storage-provisioner"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.129020    3059 topology_manager.go:215] "Topology Admit Handler" podUID="a510c35e-ae74-4076-a8ae-12913bb167bc" podNamespace="default" podName="busybox-fc5497c4f-w8gjh"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.138393    3059 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.192730    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-xtables-lock\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.192952    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e16b9968-0b37-4750-bf40-91d6bcf8dd47-tmp\") pod \"storage-provisioner\" (UID: \"e16b9968-0b37-4750-bf40-91d6bcf8dd47\") " pod="kube-system/storage-provisioner"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193033    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0102751c-4388-4e4d-80ed-3115f4ae124d-xtables-lock\") pod \"kube-proxy-c5jv4\" (UID: \"0102751c-4388-4e4d-80ed-3115f4ae124d\") " pod="kube-system/kube-proxy-c5jv4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193100    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-lib-modules\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193194    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd51aead-83ce-49c7-a860-e88ae9e25ff1-cni-cfg\") pod \"kindnet-cthl4\" (UID: \"bd51aead-83ce-49c7-a860-e88ae9e25ff1\") " pod="kube-system/kindnet-cthl4"
	May 20 14:02:46 multinode-114485 kubelet[3059]: I0520 14:02:46.193260    3059 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0102751c-4388-4e4d-80ed-3115f4ae124d-lib-modules\") pod \"kube-proxy-c5jv4\" (UID: \"0102751c-4388-4e4d-80ed-3115f4ae124d\") " pod="kube-system/kube-proxy-c5jv4"
	May 20 14:03:42 multinode-114485 kubelet[3059]: E0520 14:03:42.173236    3059 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 14:03:42 multinode-114485 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 14:03:42 multinode-114485 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 14:03:42 multinode-114485 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 14:03:42 multinode-114485 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 14:04:42 multinode-114485 kubelet[3059]: E0520 14:04:42.173276    3059 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 14:04:42 multinode-114485 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 14:04:42 multinode-114485 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 14:04:42 multinode-114485 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 14:04:42 multinode-114485 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 14:05:42 multinode-114485 kubelet[3059]: E0520 14:05:42.174597    3059 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 14:05:42 multinode-114485 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 14:05:42 multinode-114485 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 14:05:42 multinode-114485 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 14:05:42 multinode-114485 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0520 14:06:29.383950  643938 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18929-602525/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-114485 -n multinode-114485
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-114485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.50s)

TestPreload (175.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0520 14:11:59.761137  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m39.450855576s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-051001 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-051001 image pull gcr.io/k8s-minikube/busybox: (2.331905054s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-051001
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-051001: (6.630263703s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0520 14:13:01.808042  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.22000188s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-051001 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-05-20 14:13:27.328939242 +0000 UTC m=+4753.711093475
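The failure is that gcr.io/k8s-minikube/busybox, pulled successfully before the stop, no longer appears in the image list after the second start. A condensed repro sketch of the sequence the test drove, with the commands copied verbatim from the Run lines above (binary path and profile name as used in this job):

	out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-051001 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-051001
	out/minikube-linux-amd64 start -p test-preload-051001 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-051001 image list    # preload_test.go:76 expects gcr.io/k8s-minikube/busybox in this output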
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-051001 -n test-preload-051001
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-051001 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-051001 logs -n 25: (1.180526118s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485 sudo cat                                       | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt                       | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m02:/home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n                                                                 | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | multinode-114485-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-114485 ssh -n multinode-114485-m02 sudo cat                                   | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	|         | /home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-114485 node stop m03                                                          | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:58 UTC |
	| node    | multinode-114485 node start                                                             | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:58 UTC | 20 May 24 13:59 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| stop    | -p multinode-114485                                                                     | multinode-114485     | jenkins | v1.33.1 | 20 May 24 13:59 UTC |                     |
	| start   | -p multinode-114485                                                                     | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:01 UTC | 20 May 24 14:04 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:04 UTC |                     |
	| node    | multinode-114485 node delete                                                            | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:04 UTC | 20 May 24 14:04 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-114485 stop                                                                   | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:04 UTC |                     |
	| start   | -p multinode-114485                                                                     | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:06 UTC | 20 May 24 14:09 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-114485                                                                | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:09 UTC |                     |
	| start   | -p multinode-114485-m02                                                                 | multinode-114485-m02 | jenkins | v1.33.1 | 20 May 24 14:09 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-114485-m03                                                                 | multinode-114485-m03 | jenkins | v1.33.1 | 20 May 24 14:09 UTC | 20 May 24 14:10 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-114485                                                                 | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:10 UTC |                     |
	| delete  | -p multinode-114485-m03                                                                 | multinode-114485-m03 | jenkins | v1.33.1 | 20 May 24 14:10 UTC | 20 May 24 14:10 UTC |
	| delete  | -p multinode-114485                                                                     | multinode-114485     | jenkins | v1.33.1 | 20 May 24 14:10 UTC | 20 May 24 14:10 UTC |
	| start   | -p test-preload-051001                                                                  | test-preload-051001  | jenkins | v1.33.1 | 20 May 24 14:10 UTC | 20 May 24 14:12 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-051001 image pull                                                          | test-preload-051001  | jenkins | v1.33.1 | 20 May 24 14:12 UTC | 20 May 24 14:12 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-051001                                                                  | test-preload-051001  | jenkins | v1.33.1 | 20 May 24 14:12 UTC | 20 May 24 14:12 UTC |
	| start   | -p test-preload-051001                                                                  | test-preload-051001  | jenkins | v1.33.1 | 20 May 24 14:12 UTC | 20 May 24 14:13 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-051001 image list                                                          | test-preload-051001  | jenkins | v1.33.1 | 20 May 24 14:13 UTC | 20 May 24 14:13 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:12:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 14:12:22.929037  646435 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:12:22.929321  646435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:12:22.929331  646435 out.go:304] Setting ErrFile to fd 2...
	I0520 14:12:22.929335  646435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:12:22.929523  646435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:12:22.930058  646435 out.go:298] Setting JSON to false
	I0520 14:12:22.930955  646435 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14083,"bootTime":1716200260,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:12:22.931016  646435 start.go:139] virtualization: kvm guest
	I0520 14:12:22.934298  646435 out.go:177] * [test-preload-051001] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:12:22.936647  646435 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:12:22.936655  646435 notify.go:220] Checking for updates...
	I0520 14:12:22.939117  646435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:12:22.941406  646435 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:12:22.943803  646435 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:12:22.945992  646435 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:12:22.948024  646435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:12:22.950668  646435 config.go:182] Loaded profile config "test-preload-051001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 14:12:22.951381  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:12:22.951460  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:12:22.967816  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0520 14:12:22.968234  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:12:22.968908  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:12:22.968930  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:12:22.969329  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:12:22.969528  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:22.972266  646435 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 14:12:22.974234  646435 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:12:22.974538  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:12:22.974571  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:12:22.989378  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0520 14:12:22.989804  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:12:22.990245  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:12:22.990266  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:12:22.990589  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:12:22.990772  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:23.027856  646435 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:12:23.029979  646435 start.go:297] selected driver: kvm2
	I0520 14:12:23.029995  646435 start.go:901] validating driver "kvm2" against &{Name:test-preload-051001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-051001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:12:23.030089  646435 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:12:23.030735  646435 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:12:23.030815  646435 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:12:23.045879  646435 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:12:23.046222  646435 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:12:23.046275  646435 cni.go:84] Creating CNI manager for ""
	I0520 14:12:23.046286  646435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:12:23.046331  646435 start.go:340] cluster config:
	{Name:test-preload-051001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-051001 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:12:23.046426  646435 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:12:23.050521  646435 out.go:177] * Starting "test-preload-051001" primary control-plane node in "test-preload-051001" cluster
	I0520 14:12:23.052560  646435 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 14:12:23.398028  646435 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0520 14:12:23.398073  646435 cache.go:56] Caching tarball of preloaded images
	I0520 14:12:23.398215  646435 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 14:12:23.401306  646435 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0520 14:12:23.403553  646435 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 14:12:23.500330  646435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0520 14:12:34.329119  646435 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 14:12:34.329222  646435 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 14:12:35.317972  646435 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0520 14:12:35.318114  646435 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/config.json ...
	I0520 14:12:35.318358  646435 start.go:360] acquireMachinesLock for test-preload-051001: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:12:35.318424  646435 start.go:364] duration metric: took 42.917µs to acquireMachinesLock for "test-preload-051001"
	I0520 14:12:35.318442  646435 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:12:35.318447  646435 fix.go:54] fixHost starting: 
	I0520 14:12:35.318803  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:12:35.318839  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:12:35.333703  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0520 14:12:35.334222  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:12:35.334771  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:12:35.334796  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:12:35.335135  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:12:35.335359  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:35.335532  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetState
	I0520 14:12:35.337519  646435 fix.go:112] recreateIfNeeded on test-preload-051001: state=Stopped err=<nil>
	I0520 14:12:35.337545  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	W0520 14:12:35.337730  646435 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:12:35.342019  646435 out.go:177] * Restarting existing kvm2 VM for "test-preload-051001" ...
	I0520 14:12:35.344172  646435 main.go:141] libmachine: (test-preload-051001) Calling .Start
	I0520 14:12:35.344389  646435 main.go:141] libmachine: (test-preload-051001) Ensuring networks are active...
	I0520 14:12:35.345223  646435 main.go:141] libmachine: (test-preload-051001) Ensuring network default is active
	I0520 14:12:35.345548  646435 main.go:141] libmachine: (test-preload-051001) Ensuring network mk-test-preload-051001 is active
	I0520 14:12:35.345857  646435 main.go:141] libmachine: (test-preload-051001) Getting domain xml...
	I0520 14:12:35.346477  646435 main.go:141] libmachine: (test-preload-051001) Creating domain...
	I0520 14:12:36.539467  646435 main.go:141] libmachine: (test-preload-051001) Waiting to get IP...
	I0520 14:12:36.540538  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:36.540897  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:36.540975  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:36.540883  646503 retry.go:31] will retry after 211.163779ms: waiting for machine to come up
	I0520 14:12:36.753513  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:36.754085  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:36.754149  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:36.754053  646503 retry.go:31] will retry after 244.168052ms: waiting for machine to come up
	I0520 14:12:36.999539  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:37.000003  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:37.000029  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:36.999940  646503 retry.go:31] will retry after 419.674951ms: waiting for machine to come up
	I0520 14:12:37.421573  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:37.421980  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:37.422013  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:37.421921  646503 retry.go:31] will retry after 584.764857ms: waiting for machine to come up
	I0520 14:12:38.008796  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:38.009218  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:38.009261  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:38.009172  646503 retry.go:31] will retry after 542.535855ms: waiting for machine to come up
	I0520 14:12:38.553011  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:38.553170  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:38.553193  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:38.553113  646503 retry.go:31] will retry after 898.079922ms: waiting for machine to come up
	I0520 14:12:39.453201  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:39.453586  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:39.453614  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:39.453544  646503 retry.go:31] will retry after 1.029660343s: waiting for machine to come up
	I0520 14:12:40.485150  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:40.485687  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:40.485712  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:40.485633  646503 retry.go:31] will retry after 1.408582935s: waiting for machine to come up
	I0520 14:12:41.895549  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:41.896099  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:41.896133  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:41.896013  646503 retry.go:31] will retry after 1.666892182s: waiting for machine to come up
	I0520 14:12:43.565352  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:43.565795  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:43.565829  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:43.565737  646503 retry.go:31] will retry after 1.831595266s: waiting for machine to come up
	I0520 14:12:45.399315  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:45.399665  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:45.399695  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:45.399622  646503 retry.go:31] will retry after 2.847338867s: waiting for machine to come up
	I0520 14:12:48.250685  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:48.251003  646435 main.go:141] libmachine: (test-preload-051001) DBG | unable to find current IP address of domain test-preload-051001 in network mk-test-preload-051001
	I0520 14:12:48.251103  646435 main.go:141] libmachine: (test-preload-051001) DBG | I0520 14:12:48.250970  646503 retry.go:31] will retry after 2.527710432s: waiting for machine to come up
	I0520 14:12:50.780031  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.780593  646435 main.go:141] libmachine: (test-preload-051001) Found IP for machine: 192.168.39.245
	I0520 14:12:50.780620  646435 main.go:141] libmachine: (test-preload-051001) Reserving static IP address...
	I0520 14:12:50.780637  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has current primary IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.781098  646435 main.go:141] libmachine: (test-preload-051001) Reserved static IP address: 192.168.39.245
	I0520 14:12:50.781140  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "test-preload-051001", mac: "52:54:00:d6:11:98", ip: "192.168.39.245"} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:50.781156  646435 main.go:141] libmachine: (test-preload-051001) Waiting for SSH to be available...
	I0520 14:12:50.781182  646435 main.go:141] libmachine: (test-preload-051001) DBG | skip adding static IP to network mk-test-preload-051001 - found existing host DHCP lease matching {name: "test-preload-051001", mac: "52:54:00:d6:11:98", ip: "192.168.39.245"}
	I0520 14:12:50.781203  646435 main.go:141] libmachine: (test-preload-051001) DBG | Getting to WaitForSSH function...
	I0520 14:12:50.783232  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.783538  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:50.783571  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.783670  646435 main.go:141] libmachine: (test-preload-051001) DBG | Using SSH client type: external
	I0520 14:12:50.783697  646435 main.go:141] libmachine: (test-preload-051001) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa (-rw-------)
	I0520 14:12:50.783736  646435 main.go:141] libmachine: (test-preload-051001) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 14:12:50.783779  646435 main.go:141] libmachine: (test-preload-051001) DBG | About to run SSH command:
	I0520 14:12:50.783797  646435 main.go:141] libmachine: (test-preload-051001) DBG | exit 0
	I0520 14:12:50.913123  646435 main.go:141] libmachine: (test-preload-051001) DBG | SSH cmd err, output: <nil>: 
	I0520 14:12:50.913517  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetConfigRaw
	I0520 14:12:50.914172  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetIP
	I0520 14:12:50.916597  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.916903  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:50.916927  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.917156  646435 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/config.json ...
	I0520 14:12:50.917421  646435 machine.go:94] provisionDockerMachine start ...
	I0520 14:12:50.917446  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:50.917687  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:50.919919  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.920276  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:50.920301  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:50.920420  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:50.920647  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:50.920797  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:50.920924  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:50.921080  646435 main.go:141] libmachine: Using SSH client type: native
	I0520 14:12:50.921337  646435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0520 14:12:50.921353  646435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:12:51.033561  646435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 14:12:51.033597  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetMachineName
	I0520 14:12:51.033905  646435 buildroot.go:166] provisioning hostname "test-preload-051001"
	I0520 14:12:51.033936  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetMachineName
	I0520 14:12:51.034163  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.036975  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.037458  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.037484  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.037692  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:51.037882  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.038087  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.038273  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:51.038460  646435 main.go:141] libmachine: Using SSH client type: native
	I0520 14:12:51.038637  646435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0520 14:12:51.038650  646435 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-051001 && echo "test-preload-051001" | sudo tee /etc/hostname
	I0520 14:12:51.163669  646435 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-051001
	
	I0520 14:12:51.163707  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.166976  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.167354  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.167378  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.167611  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:51.167820  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.167991  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.168151  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:51.168302  646435 main.go:141] libmachine: Using SSH client type: native
	I0520 14:12:51.168489  646435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0520 14:12:51.168512  646435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-051001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-051001/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-051001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:12:51.289221  646435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 14:12:51.289271  646435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:12:51.289293  646435 buildroot.go:174] setting up certificates
	I0520 14:12:51.289307  646435 provision.go:84] configureAuth start
	I0520 14:12:51.289316  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetMachineName
	I0520 14:12:51.289632  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetIP
	I0520 14:12:51.292121  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.292548  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.292573  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.292768  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.295284  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.295667  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.295694  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.295830  646435 provision.go:143] copyHostCerts
	I0520 14:12:51.295898  646435 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:12:51.295924  646435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:12:51.296010  646435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:12:51.296135  646435 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:12:51.296145  646435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:12:51.296170  646435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:12:51.296223  646435 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:12:51.296230  646435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:12:51.296251  646435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:12:51.296297  646435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.test-preload-051001 san=[127.0.0.1 192.168.39.245 localhost minikube test-preload-051001]
	I0520 14:12:51.482190  646435 provision.go:177] copyRemoteCerts
	I0520 14:12:51.482257  646435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:12:51.482285  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.484867  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.485195  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.485233  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.485464  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:51.485668  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.485825  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:51.485944  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	I0520 14:12:51.570390  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:12:51.592339  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 14:12:51.613386  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 14:12:51.634131  646435 provision.go:87] duration metric: took 344.808382ms to configureAuth
	I0520 14:12:51.634166  646435 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:12:51.634408  646435 config.go:182] Loaded profile config "test-preload-051001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 14:12:51.634488  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.637543  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.637852  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.637883  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.638048  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:51.638282  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.638494  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.638620  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:51.638783  646435 main.go:141] libmachine: Using SSH client type: native
	I0520 14:12:51.638985  646435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0520 14:12:51.639004  646435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:12:51.898715  646435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:12:51.898745  646435 machine.go:97] duration metric: took 981.309994ms to provisionDockerMachine
	I0520 14:12:51.898759  646435 start.go:293] postStartSetup for "test-preload-051001" (driver="kvm2")
	I0520 14:12:51.898770  646435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:12:51.898785  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:51.899151  646435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:12:51.899184  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:51.901779  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.902094  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:51.902117  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:51.902311  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:51.902527  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:51.902725  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:51.902888  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	I0520 14:12:51.987080  646435 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:12:51.990874  646435 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:12:51.990897  646435 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:12:51.990974  646435 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:12:51.991043  646435 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:12:51.991133  646435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:12:51.999340  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:12:52.021229  646435 start.go:296] duration metric: took 122.450003ms for postStartSetup
	I0520 14:12:52.021300  646435 fix.go:56] duration metric: took 16.702850608s for fixHost
	I0520 14:12:52.021330  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:52.024272  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.024606  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:52.024641  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.024790  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:52.025029  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:52.025219  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:52.025387  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:52.025560  646435 main.go:141] libmachine: Using SSH client type: native
	I0520 14:12:52.025727  646435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0520 14:12:52.025738  646435 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 14:12:52.137643  646435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716214372.109991739
	
	I0520 14:12:52.137675  646435 fix.go:216] guest clock: 1716214372.109991739
	I0520 14:12:52.137687  646435 fix.go:229] Guest: 2024-05-20 14:12:52.109991739 +0000 UTC Remote: 2024-05-20 14:12:52.021305969 +0000 UTC m=+29.126615761 (delta=88.68577ms)
	I0520 14:12:52.137713  646435 fix.go:200] guest clock delta is within tolerance: 88.68577ms
	I0520 14:12:52.137718  646435 start.go:83] releasing machines lock for "test-preload-051001", held for 16.819283885s
	I0520 14:12:52.137745  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:52.138080  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetIP
	I0520 14:12:52.141408  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.141756  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:52.141786  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.141865  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:52.142501  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:52.142716  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:12:52.142849  646435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:12:52.142910  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:52.142977  646435 ssh_runner.go:195] Run: cat /version.json
	I0520 14:12:52.143003  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:12:52.145738  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.145909  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.146028  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:52.146048  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.146312  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:52.146383  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:52.146406  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:52.146521  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:52.146603  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:12:52.146708  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:52.146780  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:12:52.146873  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	I0520 14:12:52.146892  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:12:52.147020  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	W0520 14:12:52.225974  646435 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:12:52.226071  646435 ssh_runner.go:195] Run: systemctl --version
	I0520 14:12:52.260738  646435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:12:52.402961  646435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 14:12:52.409643  646435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:12:52.409718  646435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:12:52.423988  646435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 14:12:52.424015  646435 start.go:494] detecting cgroup driver to use...
	I0520 14:12:52.424086  646435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:12:52.439221  646435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:12:52.452223  646435 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:12:52.452291  646435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:12:52.465063  646435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:12:52.479521  646435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:12:52.604691  646435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:12:52.732089  646435 docker.go:233] disabling docker service ...
	I0520 14:12:52.732172  646435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:12:52.745954  646435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:12:52.758143  646435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:12:52.886331  646435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:12:53.012700  646435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:12:53.025982  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:12:53.042775  646435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0520 14:12:53.042838  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.052434  646435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:12:53.052495  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.062076  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.071733  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.081280  646435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:12:53.091182  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.101378  646435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.116738  646435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:12:53.126297  646435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:12:53.134828  646435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 14:12:53.134879  646435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 14:12:53.147302  646435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:12:53.156033  646435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:12:53.274373  646435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 14:12:53.408898  646435 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:12:53.408976  646435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:12:53.413329  646435 start.go:562] Will wait 60s for crictl version
	I0520 14:12:53.413390  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:53.416850  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:12:53.457410  646435 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 14:12:53.457506  646435 ssh_runner.go:195] Run: crio --version
	I0520 14:12:53.483043  646435 ssh_runner.go:195] Run: crio --version
	I0520 14:12:53.511118  646435 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0520 14:12:53.513320  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetIP
	I0520 14:12:53.516108  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:53.516477  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:12:53.516503  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:12:53.516698  646435 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 14:12:53.520436  646435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 14:12:53.531803  646435 kubeadm.go:877] updating cluster {Name:test-preload-051001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-051001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 14:12:53.531918  646435 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 14:12:53.531983  646435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:12:53.564646  646435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0520 14:12:53.564725  646435 ssh_runner.go:195] Run: which lz4
	I0520 14:12:53.568284  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 14:12:53.571931  646435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 14:12:53.571958  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0520 14:12:55.027935  646435 crio.go:462] duration metric: took 1.4596953s to copy over tarball
	I0520 14:12:55.028010  646435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 14:12:57.303612  646435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.275576664s)
	I0520 14:12:57.303644  646435 crio.go:469] duration metric: took 2.275677014s to extract the tarball
	I0520 14:12:57.303655  646435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 14:12:57.344723  646435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:12:57.385065  646435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0520 14:12:57.385101  646435 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 14:12:57.385196  646435 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 14:12:57.385212  646435 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 14:12:57.385225  646435 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 14:12:57.385234  646435 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 14:12:57.385192  646435 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:12:57.385285  646435 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 14:12:57.385298  646435 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 14:12:57.385314  646435 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 14:12:57.386718  646435 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 14:12:57.386724  646435 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:12:57.386733  646435 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 14:12:57.386722  646435 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 14:12:57.386762  646435 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 14:12:57.386739  646435 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 14:12:57.386765  646435 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 14:12:57.386787  646435 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 14:12:57.611838  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 14:12:57.614487  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 14:12:57.625058  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0520 14:12:57.625588  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 14:12:57.630644  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0520 14:12:57.650781  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 14:12:57.677097  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0520 14:12:57.711308  646435 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0520 14:12:57.711348  646435 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 14:12:57.711385  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.719246  646435 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0520 14:12:57.719281  646435 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 14:12:57.719317  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.773118  646435 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0520 14:12:57.773141  646435 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0520 14:12:57.773170  646435 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 14:12:57.773172  646435 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0520 14:12:57.773202  646435 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0520 14:12:57.773224  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.773226  646435 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 14:12:57.773235  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.773274  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.773404  646435 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0520 14:12:57.773433  646435 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 14:12:57.773464  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.789187  646435 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0520 14:12:57.789224  646435 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 14:12:57.789265  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0520 14:12:57.789286  646435 ssh_runner.go:195] Run: which crictl
	I0520 14:12:57.789325  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 14:12:57.789375  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0520 14:12:57.789423  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 14:12:57.789427  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0520 14:12:57.789453  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0520 14:12:57.913386  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 14:12:57.913505  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 14:12:57.923353  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0520 14:12:57.923431  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 14:12:57.923450  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0520 14:12:57.923463  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 14:12:57.923521  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0520 14:12:57.923531  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0520 14:12:57.923521  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 14:12:57.923579  646435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0520 14:12:57.923607  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0520 14:12:57.923648  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 14:12:57.923667  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0520 14:12:57.923676  646435 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 14:12:57.923704  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 14:12:57.923729  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 14:12:57.938465  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0520 14:12:57.938531  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0520 14:12:57.938538  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0520 14:12:57.968157  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0520 14:12:57.968228  646435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 14:12:57.968239  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0520 14:12:57.968329  646435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 14:12:58.217519  646435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:13:01.197231  646435 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.273499915s)
	I0520 14:13:01.197285  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0520 14:13:01.197299  646435 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.228945366s)
	I0520 14:13:01.197308  646435 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 14:13:01.197330  646435 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0520 14:13:01.197368  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0520 14:13:01.197381  646435 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.979823181s)
	I0520 14:13:01.638181  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 14:13:01.638239  646435 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 14:13:01.638327  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 14:13:02.482544  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0520 14:13:02.482591  646435 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 14:13:02.482650  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0520 14:13:04.729080  646435 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.246376682s)
	I0520 14:13:04.729120  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 14:13:04.729151  646435 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 14:13:04.729208  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0520 14:13:04.868042  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0520 14:13:04.868098  646435 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 14:13:04.868156  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 14:13:05.618460  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0520 14:13:05.618511  646435 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 14:13:05.618564  646435 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 14:13:06.056464  646435 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0520 14:13:06.056527  646435 cache_images.go:123] Successfully loaded all cached images
	I0520 14:13:06.056535  646435 cache_images.go:92] duration metric: took 8.67141323s to LoadCachedImages
	I0520 14:13:06.056553  646435 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.24.4 crio true true} ...
	I0520 14:13:06.056707  646435 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-051001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-051001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 14:13:06.056812  646435 ssh_runner.go:195] Run: crio config
	I0520 14:13:06.105511  646435 cni.go:84] Creating CNI manager for ""
	I0520 14:13:06.105536  646435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:13:06.105556  646435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:13:06.105575  646435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-051001 NodeName:test-preload-051001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 14:13:06.105746  646435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-051001"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 14:13:06.105834  646435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
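The multi-document kubeadm.yaml printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch, assuming gopkg.in/yaml.v3 is available, of how such a file could be split into its documents and a few ClusterConfiguration fields read back (illustrative only, not minikube's code path):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the log above; adjust for a real environment.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Only report the ClusterConfiguration document.
            if doc["kind"] == "ClusterConfiguration" {
                fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
                fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
            }
        }
    }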
	I0520 14:13:06.115237  646435 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:13:06.115314  646435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:13:06.124086  646435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0520 14:13:06.139240  646435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:13:06.155017  646435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0520 14:13:06.171212  646435 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0520 14:13:06.174810  646435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
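The bash one-liner above makes the control-plane.minikube.internal hosts entry idempotent: it strips any stale line and appends the current IP. A hypothetical Go equivalent of the same idea (minikube itself runs the shell command shown, not this code):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.245\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry, mirroring the `grep -v` step.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }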
	I0520 14:13:06.185716  646435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:13:06.298499  646435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:13:06.314569  646435 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001 for IP: 192.168.39.245
	I0520 14:13:06.314602  646435 certs.go:194] generating shared ca certs ...
	I0520 14:13:06.314623  646435 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:13:06.314796  646435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:13:06.314840  646435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:13:06.314850  646435 certs.go:256] generating profile certs ...
	I0520 14:13:06.314965  646435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/client.key
	I0520 14:13:06.315042  646435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/apiserver.key.6282b67e
	I0520 14:13:06.315087  646435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/proxy-client.key
	I0520 14:13:06.315196  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:13:06.315223  646435 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:13:06.315232  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:13:06.315267  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:13:06.315301  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:13:06.315326  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:13:06.315382  646435 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:13:06.316129  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:13:06.350382  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:13:06.384523  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:13:06.429868  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:13:06.461132  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 14:13:06.492364  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 14:13:06.529130  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:13:06.550931  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:13:06.573305  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:13:06.594576  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:13:06.616422  646435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:13:06.639514  646435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:13:06.656804  646435 ssh_runner.go:195] Run: openssl version
	I0520 14:13:06.662304  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:13:06.672340  646435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:13:06.676433  646435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:13:06.676501  646435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:13:06.681936  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 14:13:06.692304  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:13:06.702770  646435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:13:06.706967  646435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:13:06.707026  646435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:13:06.712527  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:13:06.722934  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:13:06.733377  646435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:13:06.737384  646435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:13:06.737426  646435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:13:06.742588  646435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 14:13:06.752148  646435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:13:06.756149  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 14:13:06.761969  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 14:13:06.767732  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 14:13:06.773683  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 14:13:06.779054  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 14:13:06.784549  646435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
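The run of `openssl x509 -checkend 86400` probes above asks whether each control-plane certificate expires within the next 24 hours (a non-zero exit would trigger regeneration). A small sketch of the same check with crypto/x509, using certificate paths taken from the log; it is an illustration, not the code minikube runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the PEM certificate at path expires within the given window.
    func expiresSoon(path string, within time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            soon, err := expiresSoon(p, 24*time.Hour)
            fmt.Println(p, "expires within 24h:", soon, "err:", err)
        }
    }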
	I0520 14:13:06.789918  646435 kubeadm.go:391] StartCluster: {Name:test-preload-051001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-051001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:13:06.790008  646435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:13:06.790048  646435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:13:06.824636  646435 cri.go:89] found id: ""
	I0520 14:13:06.824751  646435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 14:13:06.834549  646435 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 14:13:06.834578  646435 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 14:13:06.834585  646435 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 14:13:06.834632  646435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 14:13:06.843950  646435 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 14:13:06.844461  646435 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-051001" does not appear in /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:13:06.844584  646435 kubeconfig.go:62] /home/jenkins/minikube-integration/18929-602525/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-051001" cluster setting kubeconfig missing "test-preload-051001" context setting]
	I0520 14:13:06.844920  646435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:13:06.845639  646435 kapi.go:59] client config for test-preload-051001: &rest.Config{Host:"https://192.168.39.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 14:13:06.846339  646435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 14:13:06.855343  646435 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0520 14:13:06.855381  646435 kubeadm.go:1154] stopping kube-system containers ...
	I0520 14:13:06.855398  646435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 14:13:06.855453  646435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:13:06.889771  646435 cri.go:89] found id: ""
	I0520 14:13:06.889860  646435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 14:13:06.905956  646435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 14:13:06.914963  646435 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 14:13:06.914986  646435 kubeadm.go:156] found existing configuration files:
	
	I0520 14:13:06.915050  646435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 14:13:06.923415  646435 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 14:13:06.923468  646435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 14:13:06.932738  646435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 14:13:06.940656  646435 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 14:13:06.940720  646435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 14:13:06.949077  646435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 14:13:06.956994  646435 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 14:13:06.957053  646435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 14:13:06.966269  646435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 14:13:06.974489  646435 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 14:13:06.974560  646435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 14:13:06.983395  646435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 14:13:06.992460  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:07.094134  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:07.835866  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:08.095291  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:08.160381  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:08.240286  646435 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:13:08.240393  646435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:13:08.740694  646435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:13:09.241278  646435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:13:09.260109  646435 api_server.go:72] duration metric: took 1.019832438s to wait for apiserver process to appear ...
	I0520 14:13:09.260147  646435 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:13:09.260170  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:09.260741  646435 api_server.go:269] stopped: https://192.168.39.245:8443/healthz: Get "https://192.168.39.245:8443/healthz": dial tcp 192.168.39.245:8443: connect: connection refused
	I0520 14:13:09.760556  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:13.515556  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 14:13:13.515595  646435 api_server.go:103] status: https://192.168.39.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 14:13:13.515626  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:13.558670  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 14:13:13.558706  646435 api_server.go:103] status: https://192.168.39.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 14:13:13.761088  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:13.766563  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 14:13:13.766593  646435 api_server.go:103] status: https://192.168.39.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 14:13:14.261194  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:14.267559  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 14:13:14.267592  646435 api_server.go:103] status: https://192.168.39.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 14:13:14.761261  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:14.766479  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 200:
	ok
	I0520 14:13:14.772510  646435 api_server.go:141] control plane version: v1.24.4
	I0520 14:13:14.772537  646435 api_server.go:131] duration metric: took 5.512383732s to wait for apiserver health ...
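The healthz wait above cycles through connection-refused, 403 (anonymous request before RBAC bootstrap completes), and 500 (post-start hooks still failing) before settling on 200 "ok". A rough sketch of such a polling loop; the real client authenticates with the cluster CA and client certificates, whereas this illustration skips TLS verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            // Illustration only: skip verification instead of loading the cluster CA and client certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.245:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode) // 403/500 while bootstrapping
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy before the deadline")
    }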
	I0520 14:13:14.772550  646435 cni.go:84] Creating CNI manager for ""
	I0520 14:13:14.772556  646435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:13:14.775415  646435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 14:13:14.777786  646435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 14:13:14.788702  646435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 14:13:14.806986  646435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:13:14.815712  646435 system_pods.go:59] 7 kube-system pods found
	I0520 14:13:14.815753  646435 system_pods.go:61] "coredns-6d4b75cb6d-kxc4t" [73f0427c-c9c3-423c-8f9e-853d1499d1f4] Running
	I0520 14:13:14.815764  646435 system_pods.go:61] "etcd-test-preload-051001" [518385c2-0209-4bce-80ce-225b20f9c8f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 14:13:14.815771  646435 system_pods.go:61] "kube-apiserver-test-preload-051001" [106b932c-2be4-43aa-9558-a0226597ec78] Running
	I0520 14:13:14.815778  646435 system_pods.go:61] "kube-controller-manager-test-preload-051001" [67e73504-c276-42c9-978c-398998d2ad39] Running
	I0520 14:13:14.815782  646435 system_pods.go:61] "kube-proxy-526p8" [0b833eee-e2b6-45c5-b0df-860dbde5c870] Running
	I0520 14:13:14.815790  646435 system_pods.go:61] "kube-scheduler-test-preload-051001" [9a6d4b41-7351-482d-97a9-4a3ee0c0f5ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 14:13:14.815801  646435 system_pods.go:61] "storage-provisioner" [15e7a072-7463-46e8-bace-b22469bbaccc] Running
	I0520 14:13:14.815811  646435 system_pods.go:74] duration metric: took 8.788217ms to wait for pod list to return data ...
	I0520 14:13:14.815821  646435 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:13:14.818986  646435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:13:14.819018  646435 node_conditions.go:123] node cpu capacity is 2
	I0520 14:13:14.819032  646435 node_conditions.go:105] duration metric: took 3.204436ms to run NodePressure ...
	I0520 14:13:14.819053  646435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:13:15.010904  646435 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 14:13:15.015412  646435 kubeadm.go:733] kubelet initialised
	I0520 14:13:15.015444  646435 kubeadm.go:734] duration metric: took 4.506477ms waiting for restarted kubelet to initialise ...
	I0520 14:13:15.015455  646435 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:13:15.019617  646435 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:15.025562  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.025588  646435 pod_ready.go:81] duration metric: took 5.948941ms for pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:15.025600  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.025609  646435 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:15.031271  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "etcd-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.031302  646435 pod_ready.go:81] duration metric: took 5.681474ms for pod "etcd-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:15.031328  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "etcd-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.031346  646435 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:15.038118  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "kube-apiserver-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.038143  646435 pod_ready.go:81] duration metric: took 6.78462ms for pod "kube-apiserver-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:15.038151  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "kube-apiserver-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.038156  646435 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:15.212074  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.212103  646435 pod_ready.go:81] duration metric: took 173.934053ms for pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:15.212116  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.212123  646435 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-526p8" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:15.611519  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "kube-proxy-526p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.611548  646435 pod_ready.go:81] duration metric: took 399.414095ms for pod "kube-proxy-526p8" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:15.611560  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "kube-proxy-526p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:15.611568  646435 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:16.012056  646435 pod_ready.go:97] node "test-preload-051001" hosting pod "kube-scheduler-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:16.012099  646435 pod_ready.go:81] duration metric: took 400.521853ms for pod "kube-scheduler-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	E0520 14:13:16.012113  646435 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-051001" hosting pod "kube-scheduler-test-preload-051001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:16.012123  646435 pod_ready.go:38] duration metric: took 996.648442ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
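The pod_ready lines wait for each system-critical pod to report the Ready condition and skip ahead while the node itself is still NotReady. An illustrative stand-in using client-go directly (not minikube's pod_ready helper; the kubeconfig path is an assumption, not taken from this run):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-test-preload-051001", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }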
	I0520 14:13:16.012162  646435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 14:13:16.023469  646435 ops.go:34] apiserver oom_adj: -16
	I0520 14:13:16.023499  646435 kubeadm.go:591] duration metric: took 9.188907317s to restartPrimaryControlPlane
	I0520 14:13:16.023510  646435 kubeadm.go:393] duration metric: took 9.233599045s to StartCluster
	I0520 14:13:16.023534  646435 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:13:16.023621  646435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:13:16.024561  646435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:13:16.024897  646435 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:13:16.027743  646435 out.go:177] * Verifying Kubernetes components...
	I0520 14:13:16.025104  646435 config.go:182] Loaded profile config "test-preload-051001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 14:13:16.025045  646435 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 14:13:16.027793  646435 addons.go:69] Setting storage-provisioner=true in profile "test-preload-051001"
	I0520 14:13:16.027822  646435 addons.go:234] Setting addon storage-provisioner=true in "test-preload-051001"
	W0520 14:13:16.027835  646435 addons.go:243] addon storage-provisioner should already be in state true
	I0520 14:13:16.030315  646435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:13:16.027870  646435 host.go:66] Checking if "test-preload-051001" exists ...
	I0520 14:13:16.027880  646435 addons.go:69] Setting default-storageclass=true in profile "test-preload-051001"
	I0520 14:13:16.030442  646435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-051001"
	I0520 14:13:16.030788  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:13:16.030819  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:13:16.030842  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:13:16.030851  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:13:16.045949  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0520 14:13:16.046306  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0520 14:13:16.046428  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:13:16.046773  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:13:16.046983  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:13:16.047006  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:13:16.047260  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:13:16.047286  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:13:16.047344  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:13:16.047542  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetState
	I0520 14:13:16.047598  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:13:16.048155  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:13:16.048197  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:13:16.050209  646435 kapi.go:59] client config for test-preload-051001: &rest.Config{Host:"https://192.168.39.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/test-preload-051001/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 14:13:16.050547  646435 addons.go:234] Setting addon default-storageclass=true in "test-preload-051001"
	W0520 14:13:16.050566  646435 addons.go:243] addon default-storageclass should already be in state true
	I0520 14:13:16.050604  646435 host.go:66] Checking if "test-preload-051001" exists ...
	I0520 14:13:16.050954  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:13:16.050993  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:13:16.063435  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I0520 14:13:16.064010  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:13:16.064588  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:13:16.064619  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:13:16.064937  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0520 14:13:16.064987  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:13:16.065197  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetState
	I0520 14:13:16.065446  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:13:16.065975  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:13:16.066000  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:13:16.066330  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:13:16.066947  646435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:13:16.066961  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:13:16.067020  646435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:13:16.070009  646435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:13:16.072478  646435 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:13:16.072502  646435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 14:13:16.072524  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:13:16.075863  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:13:16.076562  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:13:16.076597  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:13:16.076795  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:13:16.077027  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:13:16.077275  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:13:16.077449  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	I0520 14:13:16.082854  646435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0520 14:13:16.083293  646435 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:13:16.083788  646435 main.go:141] libmachine: Using API Version  1
	I0520 14:13:16.083811  646435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:13:16.084213  646435 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:13:16.084415  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetState
	I0520 14:13:16.085987  646435 main.go:141] libmachine: (test-preload-051001) Calling .DriverName
	I0520 14:13:16.086355  646435 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 14:13:16.086371  646435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 14:13:16.086386  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHHostname
	I0520 14:13:16.089050  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:13:16.089529  646435 main.go:141] libmachine: (test-preload-051001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:11:98", ip: ""} in network mk-test-preload-051001: {Iface:virbr1 ExpiryTime:2024-05-20 15:10:48 +0000 UTC Type:0 Mac:52:54:00:d6:11:98 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:test-preload-051001 Clientid:01:52:54:00:d6:11:98}
	I0520 14:13:16.089557  646435 main.go:141] libmachine: (test-preload-051001) DBG | domain test-preload-051001 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:11:98 in network mk-test-preload-051001
	I0520 14:13:16.089708  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHPort
	I0520 14:13:16.089924  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHKeyPath
	I0520 14:13:16.090103  646435 main.go:141] libmachine: (test-preload-051001) Calling .GetSSHUsername
	I0520 14:13:16.090261  646435 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/test-preload-051001/id_rsa Username:docker}
	I0520 14:13:16.203651  646435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:13:16.219775  646435 node_ready.go:35] waiting up to 6m0s for node "test-preload-051001" to be "Ready" ...
	I0520 14:13:16.331926  646435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:13:16.346596  646435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 14:13:17.357796  646435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.011150056s)
	I0520 14:13:17.357850  646435 main.go:141] libmachine: Making call to close driver server
	I0520 14:13:17.357862  646435 main.go:141] libmachine: (test-preload-051001) Calling .Close
	I0520 14:13:17.357906  646435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025927459s)
	I0520 14:13:17.357957  646435 main.go:141] libmachine: Making call to close driver server
	I0520 14:13:17.357973  646435 main.go:141] libmachine: (test-preload-051001) Calling .Close
	I0520 14:13:17.358208  646435 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:13:17.358225  646435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:13:17.358236  646435 main.go:141] libmachine: Making call to close driver server
	I0520 14:13:17.358244  646435 main.go:141] libmachine: (test-preload-051001) Calling .Close
	I0520 14:13:17.358400  646435 main.go:141] libmachine: (test-preload-051001) DBG | Closing plugin on server side
	I0520 14:13:17.358418  646435 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:13:17.358430  646435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:13:17.358445  646435 main.go:141] libmachine: Making call to close driver server
	I0520 14:13:17.358456  646435 main.go:141] libmachine: (test-preload-051001) Calling .Close
	I0520 14:13:17.358518  646435 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:13:17.358543  646435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:13:17.358720  646435 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:13:17.358741  646435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:13:17.358526  646435 main.go:141] libmachine: (test-preload-051001) DBG | Closing plugin on server side
	I0520 14:13:17.365826  646435 main.go:141] libmachine: Making call to close driver server
	I0520 14:13:17.365840  646435 main.go:141] libmachine: (test-preload-051001) Calling .Close
	I0520 14:13:17.366072  646435 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:13:17.366090  646435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:13:17.366124  646435 main.go:141] libmachine: (test-preload-051001) DBG | Closing plugin on server side
	I0520 14:13:17.368814  646435 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 14:13:17.370882  646435 addons.go:505] duration metric: took 1.345859156s for enable addons: enabled=[storage-provisioner default-storageclass]
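Both addon manifests above are applied by invoking the bundled kubectl over SSH with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. As a local, hypothetical illustration of the same invocation via os/exec (not minikube's ssh_runner; paths mirror the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        for _, manifest := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            // Run the version-pinned kubectl against the in-VM kubeconfig, as the log shows.
            cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", manifest)
            cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
            out, err := cmd.CombinedOutput()
            fmt.Printf("%s: %s (err=%v)\n", manifest, out, err)
        }
    }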
	I0520 14:13:18.223778  646435 node_ready.go:53] node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:20.723986  646435 node_ready.go:53] node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:23.223481  646435 node_ready.go:53] node "test-preload-051001" has status "Ready":"False"
	I0520 14:13:24.222901  646435 node_ready.go:49] node "test-preload-051001" has status "Ready":"True"
	I0520 14:13:24.222932  646435 node_ready.go:38] duration metric: took 8.003124463s for node "test-preload-051001" to be "Ready" ...
	I0520 14:13:24.222945  646435 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:13:24.228051  646435 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:24.232369  646435 pod_ready.go:92] pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:24.232389  646435 pod_ready.go:81] duration metric: took 4.314044ms for pod "coredns-6d4b75cb6d-kxc4t" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:24.232396  646435 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.738516  646435 pod_ready.go:92] pod "etcd-test-preload-051001" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:25.738539  646435 pod_ready.go:81] duration metric: took 1.506136355s for pod "etcd-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.738548  646435 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.743546  646435 pod_ready.go:92] pod "kube-apiserver-test-preload-051001" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:25.743573  646435 pod_ready.go:81] duration metric: took 5.01698ms for pod "kube-apiserver-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.743585  646435 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.751428  646435 pod_ready.go:92] pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:25.751451  646435 pod_ready.go:81] duration metric: took 7.856934ms for pod "kube-controller-manager-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.751461  646435 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-526p8" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.827678  646435 pod_ready.go:92] pod "kube-proxy-526p8" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:25.827706  646435 pod_ready.go:81] duration metric: took 76.238344ms for pod "kube-proxy-526p8" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:25.827719  646435 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:26.223496  646435 pod_ready.go:92] pod "kube-scheduler-test-preload-051001" in "kube-system" namespace has status "Ready":"True"
	I0520 14:13:26.223521  646435 pod_ready.go:81] duration metric: took 395.795179ms for pod "kube-scheduler-test-preload-051001" in "kube-system" namespace to be "Ready" ...
	I0520 14:13:26.223532  646435 pod_ready.go:38] duration metric: took 2.000575219s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:13:26.223555  646435 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:13:26.223614  646435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:13:26.239041  646435 api_server.go:72] duration metric: took 10.214097898s to wait for apiserver process to appear ...
	I0520 14:13:26.239094  646435 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:13:26.239177  646435 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0520 14:13:26.244934  646435 api_server.go:279] https://192.168.39.245:8443/healthz returned 200:
	ok
	I0520 14:13:26.245865  646435 api_server.go:141] control plane version: v1.24.4
	I0520 14:13:26.245887  646435 api_server.go:131] duration metric: took 6.785555ms to wait for apiserver health ...
	I0520 14:13:26.245895  646435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:13:26.426723  646435 system_pods.go:59] 7 kube-system pods found
	I0520 14:13:26.426753  646435 system_pods.go:61] "coredns-6d4b75cb6d-kxc4t" [73f0427c-c9c3-423c-8f9e-853d1499d1f4] Running
	I0520 14:13:26.426758  646435 system_pods.go:61] "etcd-test-preload-051001" [518385c2-0209-4bce-80ce-225b20f9c8f1] Running
	I0520 14:13:26.426762  646435 system_pods.go:61] "kube-apiserver-test-preload-051001" [106b932c-2be4-43aa-9558-a0226597ec78] Running
	I0520 14:13:26.426766  646435 system_pods.go:61] "kube-controller-manager-test-preload-051001" [67e73504-c276-42c9-978c-398998d2ad39] Running
	I0520 14:13:26.426769  646435 system_pods.go:61] "kube-proxy-526p8" [0b833eee-e2b6-45c5-b0df-860dbde5c870] Running
	I0520 14:13:26.426771  646435 system_pods.go:61] "kube-scheduler-test-preload-051001" [9a6d4b41-7351-482d-97a9-4a3ee0c0f5ce] Running
	I0520 14:13:26.426774  646435 system_pods.go:61] "storage-provisioner" [15e7a072-7463-46e8-bace-b22469bbaccc] Running
	I0520 14:13:26.426780  646435 system_pods.go:74] duration metric: took 180.879569ms to wait for pod list to return data ...
	I0520 14:13:26.426788  646435 default_sa.go:34] waiting for default service account to be created ...
	I0520 14:13:26.622437  646435 default_sa.go:45] found service account: "default"
	I0520 14:13:26.622466  646435 default_sa.go:55] duration metric: took 195.672214ms for default service account to be created ...
	I0520 14:13:26.622476  646435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 14:13:26.825932  646435 system_pods.go:86] 7 kube-system pods found
	I0520 14:13:26.825969  646435 system_pods.go:89] "coredns-6d4b75cb6d-kxc4t" [73f0427c-c9c3-423c-8f9e-853d1499d1f4] Running
	I0520 14:13:26.825975  646435 system_pods.go:89] "etcd-test-preload-051001" [518385c2-0209-4bce-80ce-225b20f9c8f1] Running
	I0520 14:13:26.825979  646435 system_pods.go:89] "kube-apiserver-test-preload-051001" [106b932c-2be4-43aa-9558-a0226597ec78] Running
	I0520 14:13:26.825983  646435 system_pods.go:89] "kube-controller-manager-test-preload-051001" [67e73504-c276-42c9-978c-398998d2ad39] Running
	I0520 14:13:26.825993  646435 system_pods.go:89] "kube-proxy-526p8" [0b833eee-e2b6-45c5-b0df-860dbde5c870] Running
	I0520 14:13:26.825997  646435 system_pods.go:89] "kube-scheduler-test-preload-051001" [9a6d4b41-7351-482d-97a9-4a3ee0c0f5ce] Running
	I0520 14:13:26.826000  646435 system_pods.go:89] "storage-provisioner" [15e7a072-7463-46e8-bace-b22469bbaccc] Running
	I0520 14:13:26.826007  646435 system_pods.go:126] duration metric: took 203.525113ms to wait for k8s-apps to be running ...
	I0520 14:13:26.826015  646435 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 14:13:26.826058  646435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 14:13:26.840793  646435 system_svc.go:56] duration metric: took 14.770085ms WaitForService to wait for kubelet
	I0520 14:13:26.840830  646435 kubeadm.go:576] duration metric: took 10.815894312s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:13:26.840856  646435 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:13:27.022794  646435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:13:27.022821  646435 node_conditions.go:123] node cpu capacity is 2
	I0520 14:13:27.022832  646435 node_conditions.go:105] duration metric: took 181.971198ms to run NodePressure ...
	I0520 14:13:27.022844  646435 start.go:240] waiting for startup goroutines ...
	I0520 14:13:27.022850  646435 start.go:245] waiting for cluster config update ...
	I0520 14:13:27.022860  646435 start.go:254] writing updated cluster config ...
	I0520 14:13:27.023187  646435 ssh_runner.go:195] Run: rm -f paused
	I0520 14:13:27.070886  646435 start.go:600] kubectl: 1.30.1, cluster: 1.24.4 (minor skew: 6)
	I0520 14:13:27.073596  646435 out.go:177] 
	W0520 14:13:27.075851  646435 out.go:239] ! /usr/local/bin/kubectl is version 1.30.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0520 14:13:27.078107  646435 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0520 14:13:27.080610  646435 out.go:177] * Done! kubectl is now configured to use "test-preload-051001" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 14:13:27 test-preload-051001 crio[691]: time="2024-05-20 14:13:27.986954480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214407986933393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57c554b3-431b-4be3-9768-d9925761735c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:27 test-preload-051001 crio[691]: time="2024-05-20 14:13:27.987477897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d160713-7ac1-4aae-8893-0de6540f8df9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:27 test-preload-051001 crio[691]: time="2024-05-20 14:13:27.987528672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d160713-7ac1-4aae-8893-0de6540f8df9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:27 test-preload-051001 crio[691]: time="2024-05-20 14:13:27.987738072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee393334499e1ab20e283d3fc33ceafcddead35b0913ec24296acd1206da44c9,PodSandboxId:507f1d41c992014e92ec3c965547a80d8df25cd736cb04a79043acb1cb00e349,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716214402433009369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kxc4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f0427c-c9c3-423c-8f9e-853d1499d1f4,},Annotations:map[string]string{io.kubernetes.container.hash: 9756d59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaea3d71288be92fe6a988274638d5dd30925f5f9396d41cb040c737db894e2,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716214396385462356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716214395259870191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52861cf00bda5e22c15196d669db511a91ed33d3ab74d2d6274ef82c304532c5,PodSandboxId:d3d6aac6fc6508c99ac8db89bb08215650bd73131696620c11aeb6afb1a5d28c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716214395267084147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-526p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b833eee-e2b6-45
c5-b0df-860dbde5c870,},Annotations:map[string]string{io.kubernetes.container.hash: 4c3f7d0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110d35b5cb2ea2d5719f54eb6213ab35b262e6475fdf4c6e05e52afa2ab4be5e,PodSandboxId:45b406a8ed0295f25a3c6019708e322ae114624460d6f1f8f76665c24bde8871,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716214388994715642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadceead4ea84924ddd04a3
c09dbf4c0,},Annotations:map[string]string{io.kubernetes.container.hash: e9909b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d64d4fede758e051d73c624603979dbc015ca07859bccabc802088d4fd1a0f,PodSandboxId:31dce039c96e4df0a0aab9b483dea96e9220ef976f7320fc36503a515f07eb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716214388966936965,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fbcc0432346ed7e25e712fcd7a47d,},Annotations:map[string]string
{io.kubernetes.container.hash: 97f7c454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59181a48f2e103d1976d8e0a4e6721a347f8973777479729f938de2c89e8ee3,PodSandboxId:dd0762cfd562e904e963c39701f82740a15739e0dadcb90b319181ab45fbb63f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716214388976735602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 296dc699c271732f799e6fbc1b8f0a53,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbf0da1a66c9bd5899503b8b98ebcbd25e1db94f7054e227bcdd993e630f0e7,PodSandboxId:78e73dbf49f5cc83db06664bd068bd6017b397c868c2526b0250ff04a644adcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716214388906990781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b9e990ce65cbd297883064e32925781,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d160713-7ac1-4aae-8893-0de6540f8df9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.022971049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c43a94e-66d3-430f-9aae-b236dbac1313 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.023043734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c43a94e-66d3-430f-9aae-b236dbac1313 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.024054652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0e0a100-66cc-476f-a8b2-72ab804fd73b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.024550674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214408024526171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0e0a100-66cc-476f-a8b2-72ab804fd73b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.025005685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=330f9042-48aa-4247-b0ee-a41259ce93aa name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.025054771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=330f9042-48aa-4247-b0ee-a41259ce93aa name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.025218582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee393334499e1ab20e283d3fc33ceafcddead35b0913ec24296acd1206da44c9,PodSandboxId:507f1d41c992014e92ec3c965547a80d8df25cd736cb04a79043acb1cb00e349,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716214402433009369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kxc4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f0427c-c9c3-423c-8f9e-853d1499d1f4,},Annotations:map[string]string{io.kubernetes.container.hash: 9756d59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaea3d71288be92fe6a988274638d5dd30925f5f9396d41cb040c737db894e2,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716214396385462356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716214395259870191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52861cf00bda5e22c15196d669db511a91ed33d3ab74d2d6274ef82c304532c5,PodSandboxId:d3d6aac6fc6508c99ac8db89bb08215650bd73131696620c11aeb6afb1a5d28c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716214395267084147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-526p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b833eee-e2b6-45
c5-b0df-860dbde5c870,},Annotations:map[string]string{io.kubernetes.container.hash: 4c3f7d0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110d35b5cb2ea2d5719f54eb6213ab35b262e6475fdf4c6e05e52afa2ab4be5e,PodSandboxId:45b406a8ed0295f25a3c6019708e322ae114624460d6f1f8f76665c24bde8871,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716214388994715642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadceead4ea84924ddd04a3
c09dbf4c0,},Annotations:map[string]string{io.kubernetes.container.hash: e9909b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d64d4fede758e051d73c624603979dbc015ca07859bccabc802088d4fd1a0f,PodSandboxId:31dce039c96e4df0a0aab9b483dea96e9220ef976f7320fc36503a515f07eb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716214388966936965,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fbcc0432346ed7e25e712fcd7a47d,},Annotations:map[string]string
{io.kubernetes.container.hash: 97f7c454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59181a48f2e103d1976d8e0a4e6721a347f8973777479729f938de2c89e8ee3,PodSandboxId:dd0762cfd562e904e963c39701f82740a15739e0dadcb90b319181ab45fbb63f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716214388976735602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 296dc699c271732f799e6fbc1b8f0a53,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbf0da1a66c9bd5899503b8b98ebcbd25e1db94f7054e227bcdd993e630f0e7,PodSandboxId:78e73dbf49f5cc83db06664bd068bd6017b397c868c2526b0250ff04a644adcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716214388906990781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b9e990ce65cbd297883064e32925781,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=330f9042-48aa-4247-b0ee-a41259ce93aa name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.058677023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dff88f37-9b0d-415c-a45f-3f5731e75c86 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.058748031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dff88f37-9b0d-415c-a45f-3f5731e75c86 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.059870418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1d1473b-3a9f-4b4e-825e-ada9f2a3c958 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.060547491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214408060520773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1d1473b-3a9f-4b4e-825e-ada9f2a3c958 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.061125669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90efacaa-bf22-41ca-8c7f-c48ef8a79af3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.061175223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90efacaa-bf22-41ca-8c7f-c48ef8a79af3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.061379419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee393334499e1ab20e283d3fc33ceafcddead35b0913ec24296acd1206da44c9,PodSandboxId:507f1d41c992014e92ec3c965547a80d8df25cd736cb04a79043acb1cb00e349,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716214402433009369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kxc4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f0427c-c9c3-423c-8f9e-853d1499d1f4,},Annotations:map[string]string{io.kubernetes.container.hash: 9756d59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaea3d71288be92fe6a988274638d5dd30925f5f9396d41cb040c737db894e2,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716214396385462356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716214395259870191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52861cf00bda5e22c15196d669db511a91ed33d3ab74d2d6274ef82c304532c5,PodSandboxId:d3d6aac6fc6508c99ac8db89bb08215650bd73131696620c11aeb6afb1a5d28c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716214395267084147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-526p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b833eee-e2b6-45
c5-b0df-860dbde5c870,},Annotations:map[string]string{io.kubernetes.container.hash: 4c3f7d0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110d35b5cb2ea2d5719f54eb6213ab35b262e6475fdf4c6e05e52afa2ab4be5e,PodSandboxId:45b406a8ed0295f25a3c6019708e322ae114624460d6f1f8f76665c24bde8871,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716214388994715642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadceead4ea84924ddd04a3
c09dbf4c0,},Annotations:map[string]string{io.kubernetes.container.hash: e9909b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d64d4fede758e051d73c624603979dbc015ca07859bccabc802088d4fd1a0f,PodSandboxId:31dce039c96e4df0a0aab9b483dea96e9220ef976f7320fc36503a515f07eb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716214388966936965,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fbcc0432346ed7e25e712fcd7a47d,},Annotations:map[string]string
{io.kubernetes.container.hash: 97f7c454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59181a48f2e103d1976d8e0a4e6721a347f8973777479729f938de2c89e8ee3,PodSandboxId:dd0762cfd562e904e963c39701f82740a15739e0dadcb90b319181ab45fbb63f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716214388976735602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 296dc699c271732f799e6fbc1b8f0a53,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbf0da1a66c9bd5899503b8b98ebcbd25e1db94f7054e227bcdd993e630f0e7,PodSandboxId:78e73dbf49f5cc83db06664bd068bd6017b397c868c2526b0250ff04a644adcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716214388906990781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b9e990ce65cbd297883064e32925781,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90efacaa-bf22-41ca-8c7f-c48ef8a79af3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.092167641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b77d9300-a146-406c-b96b-f52ff3da4148 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.092491323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b77d9300-a146-406c-b96b-f52ff3da4148 name=/runtime.v1.RuntimeService/Version
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.093795169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19a58ee4-f099-45c2-b7be-d12b7d2b1ec6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.094210789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214408094188812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19a58ee4-f099-45c2-b7be-d12b7d2b1ec6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.094926665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b9bdec1-3331-456f-92d5-5400f4dbe16a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.094978110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b9bdec1-3331-456f-92d5-5400f4dbe16a name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:13:28 test-preload-051001 crio[691]: time="2024-05-20 14:13:28.095175064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee393334499e1ab20e283d3fc33ceafcddead35b0913ec24296acd1206da44c9,PodSandboxId:507f1d41c992014e92ec3c965547a80d8df25cd736cb04a79043acb1cb00e349,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716214402433009369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kxc4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f0427c-c9c3-423c-8f9e-853d1499d1f4,},Annotations:map[string]string{io.kubernetes.container.hash: 9756d59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaea3d71288be92fe6a988274638d5dd30925f5f9396d41cb040c737db894e2,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716214396385462356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda,PodSandboxId:51aceef7b6319a97a9db11d0ec53a7674ed9fb76c0a142b1a7e7db41262608e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716214395259870191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 15e7a072-7463-46e8-bace-b22469bbaccc,},Annotations:map[string]string{io.kubernetes.container.hash: 7a388eb7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52861cf00bda5e22c15196d669db511a91ed33d3ab74d2d6274ef82c304532c5,PodSandboxId:d3d6aac6fc6508c99ac8db89bb08215650bd73131696620c11aeb6afb1a5d28c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716214395267084147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-526p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b833eee-e2b6-45
c5-b0df-860dbde5c870,},Annotations:map[string]string{io.kubernetes.container.hash: 4c3f7d0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110d35b5cb2ea2d5719f54eb6213ab35b262e6475fdf4c6e05e52afa2ab4be5e,PodSandboxId:45b406a8ed0295f25a3c6019708e322ae114624460d6f1f8f76665c24bde8871,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716214388994715642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadceead4ea84924ddd04a3
c09dbf4c0,},Annotations:map[string]string{io.kubernetes.container.hash: e9909b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d64d4fede758e051d73c624603979dbc015ca07859bccabc802088d4fd1a0f,PodSandboxId:31dce039c96e4df0a0aab9b483dea96e9220ef976f7320fc36503a515f07eb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716214388966936965,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fbcc0432346ed7e25e712fcd7a47d,},Annotations:map[string]string
{io.kubernetes.container.hash: 97f7c454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59181a48f2e103d1976d8e0a4e6721a347f8973777479729f938de2c89e8ee3,PodSandboxId:dd0762cfd562e904e963c39701f82740a15739e0dadcb90b319181ab45fbb63f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716214388976735602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 296dc699c271732f799e6fbc1b8f0a53,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbf0da1a66c9bd5899503b8b98ebcbd25e1db94f7054e227bcdd993e630f0e7,PodSandboxId:78e73dbf49f5cc83db06664bd068bd6017b397c868c2526b0250ff04a644adcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716214388906990781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-051001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b9e990ce65cbd297883064e32925781,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b9bdec1-3331-456f-92d5-5400f4dbe16a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee393334499e1       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   507f1d41c9920       coredns-6d4b75cb6d-kxc4t
	9aaea3d71288b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       3                   51aceef7b6319       storage-provisioner
	52861cf00bda5       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   d3d6aac6fc650       kube-proxy-526p8
	7277d62e4bd75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       2                   51aceef7b6319       storage-provisioner
	110d35b5cb2ea       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   45b406a8ed029       kube-apiserver-test-preload-051001
	e59181a48f2e1       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   dd0762cfd562e       kube-scheduler-test-preload-051001
	31d64d4fede75       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   31dce039c96e4       etcd-test-preload-051001
	8fbf0da1a66c9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   78e73dbf49f5c       kube-controller-manager-test-preload-051001
	
	
	==> coredns [ee393334499e1ab20e283d3fc33ceafcddead35b0913ec24296acd1206da44c9] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47696 - 43812 "HINFO IN 5350124253229756054.2685417101785371371. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011381471s
	
	
	==> describe nodes <==
	Name:               test-preload-051001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-051001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=test-preload-051001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T14_11_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:11:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-051001
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:13:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:13:23 +0000   Mon, 20 May 2024 14:11:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:13:23 +0000   Mon, 20 May 2024 14:11:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:13:23 +0000   Mon, 20 May 2024 14:11:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:13:23 +0000   Mon, 20 May 2024 14:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    test-preload-051001
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8098174bde3427680b4831d325868cc
	  System UUID:                a8098174-bde3-4276-80b4-831d325868cc
	  Boot ID:                    29c28081-2abd-486e-b4b3-9686311f9aa0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kxc4t                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-051001                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kube-apiserver-test-preload-051001             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-test-preload-051001    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-526p8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-051001             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node test-preload-051001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node test-preload-051001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                kubelet          Node test-preload-051001 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s                kubelet          Node test-preload-051001 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node test-preload-051001 event: Registered Node test-preload-051001 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-051001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-051001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-051001 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-051001 event: Registered Node test-preload-051001 in Controller
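The Allocated resources figures above are internally consistent with a 2-CPU / 2200MB guest (an inference from the truncated percentages; the VM size for this profile is not shown in this excerpt):

    cpu:    100m + 100m + 250m + 200m + 100m = 750m,  750m / 2000m = 37.5%  -> shown as 37%
    memory: 70Mi + 100Mi = 170Mi requested,           170Mi / ~2098Mi ≈ 8%  -> shown as 8%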
	
	
	==> dmesg <==
	[May20 14:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051481] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037594] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.422459] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.828104] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.532511] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.174259] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.058045] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050527] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.153197] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.142404] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.263239] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[May20 14:13] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.058445] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.727234] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +7.179043] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.905198] systemd-fstab-generator[1715]: Ignoring "noauto" option for root device
	[  +6.140390] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [31d64d4fede758e051d73c624603979dbc015ca07859bccabc802088d4fd1a0f] <==
	{"level":"info","ts":"2024-05-20T14:13:09.382Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c66b2a9605a64cb6","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-20T14:13:09.384Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-20T14:13:09.384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 switched to configuration voters=(14297568265846017206)"}
	{"level":"info","ts":"2024-05-20T14:13:09.384Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","added-peer-id":"c66b2a9605a64cb6","added-peer-peer-urls":["https://192.168.39.245:2380"]}
	{"level":"info","ts":"2024-05-20T14:13:09.385Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:13:09.385Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:13:09.387Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:13:09.391Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-05-20T14:13:09.395Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-05-20T14:13:09.394Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c66b2a9605a64cb6","initial-advertise-peer-urls":["https://192.168.39.245:2380"],"listen-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:13:09.394Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgPreVoteResp from c66b2a9605a64cb6 at term 2"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgVoteResp from c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c66b2a9605a64cb6 elected leader c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-05-20T14:13:11.131Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c66b2a9605a64cb6","local-member-attributes":"{Name:test-preload-051001 ClientURLs:[https://192.168.39.245:2379]}","request-path":"/0/members/c66b2a9605a64cb6/attributes","cluster-id":"8f5341249654324","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:13:11.132Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:13:11.133Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T14:13:11.134Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:13:11.135Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.245:2379"}
	{"level":"info","ts":"2024-05-20T14:13:11.135Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:13:11.135Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:13:28 up 0 min,  0 users,  load average: 0.57, 0.15, 0.05
	Linux test-preload-051001 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [110d35b5cb2ea2d5719f54eb6213ab35b262e6475fdf4c6e05e52afa2ab4be5e] <==
	I0520 14:13:13.503151       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 14:13:13.503308       1 establishing_controller.go:76] Starting EstablishingController
	I0520 14:13:13.503342       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0520 14:13:13.503357       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 14:13:13.503411       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 14:13:13.518479       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0520 14:13:13.518552       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0520 14:13:13.565554       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0520 14:13:13.567065       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:13:13.580629       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:13:13.581342       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0520 14:13:13.618923       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0520 14:13:13.660563       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0520 14:13:13.662599       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 14:13:13.664933       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:13:14.159132       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 14:13:14.467400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:13:14.912933       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0520 14:13:14.928615       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0520 14:13:14.971083       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0520 14:13:14.989042       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:13:14.995825       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 14:13:15.511725       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0520 14:13:26.226165       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 14:13:26.330181       1 controller.go:611] quota admission added evaluator for: endpoints
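The apiserver log shows caches syncing and quota evaluators re-registering between 14:13:13 and 14:13:26, i.e. the control plane came back. A one-line readiness check (a sketch, reusing the kubectl context that the post-mortem commands below use) would be:

    kubectl --context test-preload-051001 get --raw='/readyz?verbose'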
	
	
	==> kube-controller-manager [8fbf0da1a66c9bd5899503b8b98ebcbd25e1db94f7054e227bcdd993e630f0e7] <==
	I0520 14:13:26.214056       1 shared_informer.go:262] Caches are synced for cronjob
	I0520 14:13:26.216078       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0520 14:13:26.219738       1 shared_informer.go:262] Caches are synced for namespace
	I0520 14:13:26.223323       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0520 14:13:26.223539       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0520 14:13:26.237751       1 shared_informer.go:262] Caches are synced for node
	I0520 14:13:26.237958       1 range_allocator.go:173] Starting range CIDR allocator
	I0520 14:13:26.238123       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0520 14:13:26.238191       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0520 14:13:26.242662       1 shared_informer.go:262] Caches are synced for attach detach
	I0520 14:13:26.254555       1 shared_informer.go:262] Caches are synced for HPA
	I0520 14:13:26.260132       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0520 14:13:26.269625       1 shared_informer.go:262] Caches are synced for persistent volume
	I0520 14:13:26.288290       1 shared_informer.go:262] Caches are synced for stateful set
	I0520 14:13:26.296328       1 shared_informer.go:262] Caches are synced for expand
	I0520 14:13:26.321603       1 shared_informer.go:262] Caches are synced for PV protection
	I0520 14:13:26.344825       1 shared_informer.go:262] Caches are synced for deployment
	I0520 14:13:26.346594       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0520 14:13:26.358404       1 shared_informer.go:262] Caches are synced for disruption
	I0520 14:13:26.358521       1 disruption.go:371] Sending events to api server.
	I0520 14:13:26.428612       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 14:13:26.474370       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 14:13:26.872781       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 14:13:26.872819       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0520 14:13:26.879303       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [52861cf00bda5e22c15196d669db511a91ed33d3ab74d2d6274ef82c304532c5] <==
	I0520 14:13:15.473595       1 node.go:163] Successfully retrieved node IP: 192.168.39.245
	I0520 14:13:15.473719       1 server_others.go:138] "Detected node IP" address="192.168.39.245"
	I0520 14:13:15.473791       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0520 14:13:15.502787       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0520 14:13:15.502819       1 server_others.go:206] "Using iptables Proxier"
	I0520 14:13:15.503290       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0520 14:13:15.503787       1 server.go:661] "Version info" version="v1.24.4"
	I0520 14:13:15.503823       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:13:15.504886       1 config.go:317] "Starting service config controller"
	I0520 14:13:15.505140       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0520 14:13:15.505229       1 config.go:226] "Starting endpoint slice config controller"
	I0520 14:13:15.505320       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0520 14:13:15.506230       1 config.go:444] "Starting node config controller"
	I0520 14:13:15.507995       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0520 14:13:15.605979       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0520 14:13:15.606008       1 shared_informer.go:262] Caches are synced for service config
	I0520 14:13:15.608368       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [e59181a48f2e103d1976d8e0a4e6721a347f8973777479729f938de2c89e8ee3] <==
	I0520 14:13:09.465535       1 serving.go:348] Generated self-signed cert in-memory
	W0520 14:13:13.526153       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 14:13:13.526297       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:13:13.526314       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 14:13:13.526324       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 14:13:13.622282       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0520 14:13:13.622335       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:13:13.636136       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0520 14:13:13.636386       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 14:13:13.636427       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:13:13.636467       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 14:13:13.736775       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.215186    1083 topology_manager.go:200] "Topology Admit Handler"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.217401    1083 topology_manager.go:200] "Topology Admit Handler"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.218959    1083 topology_manager.go:200] "Topology Admit Handler"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.220946    1083 topology_manager.go:200] "Topology Admit Handler"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: E0520 14:13:14.221081    1083 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-kxc4t" podUID=73f0427c-c9c3-423c-8f9e-853d1499d1f4
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.281399    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjkb\" (UniqueName: \"kubernetes.io/projected/0b833eee-e2b6-45c5-b0df-860dbde5c870-kube-api-access-pxjkb\") pod \"kube-proxy-526p8\" (UID: \"0b833eee-e2b6-45c5-b0df-860dbde5c870\") " pod="kube-system/kube-proxy-526p8"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.281771    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15e7a072-7463-46e8-bace-b22469bbaccc-tmp\") pod \"storage-provisioner\" (UID: \"15e7a072-7463-46e8-bace-b22469bbaccc\") " pod="kube-system/storage-provisioner"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.281857    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b833eee-e2b6-45c5-b0df-860dbde5c870-kube-proxy\") pod \"kube-proxy-526p8\" (UID: \"0b833eee-e2b6-45c5-b0df-860dbde5c870\") " pod="kube-system/kube-proxy-526p8"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.281916    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b833eee-e2b6-45c5-b0df-860dbde5c870-xtables-lock\") pod \"kube-proxy-526p8\" (UID: \"0b833eee-e2b6-45c5-b0df-860dbde5c870\") " pod="kube-system/kube-proxy-526p8"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.282037    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b833eee-e2b6-45c5-b0df-860dbde5c870-lib-modules\") pod \"kube-proxy-526p8\" (UID: \"0b833eee-e2b6-45c5-b0df-860dbde5c870\") " pod="kube-system/kube-proxy-526p8"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.282115    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume\") pod \"coredns-6d4b75cb6d-kxc4t\" (UID: \"73f0427c-c9c3-423c-8f9e-853d1499d1f4\") " pod="kube-system/coredns-6d4b75cb6d-kxc4t"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.282136    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vl7\" (UniqueName: \"kubernetes.io/projected/73f0427c-c9c3-423c-8f9e-853d1499d1f4-kube-api-access-q5vl7\") pod \"coredns-6d4b75cb6d-kxc4t\" (UID: \"73f0427c-c9c3-423c-8f9e-853d1499d1f4\") " pod="kube-system/coredns-6d4b75cb6d-kxc4t"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.282162    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7tf\" (UniqueName: \"kubernetes.io/projected/15e7a072-7463-46e8-bace-b22469bbaccc-kube-api-access-pm7tf\") pod \"storage-provisioner\" (UID: \"15e7a072-7463-46e8-bace-b22469bbaccc\") " pod="kube-system/storage-provisioner"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.282181    1083 reconciler.go:159] "Reconciler: start to sync state"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: I0520 14:13:14.322993    1083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cce69124-39fe-4c7a-969f-f952a4b2ff66 path="/var/lib/kubelet/pods/cce69124-39fe-4c7a-969f-f952a4b2ff66/volumes"
	May 20 14:13:14 test-preload-051001 kubelet[1083]: E0520 14:13:14.386308    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 14:13:14 test-preload-051001 kubelet[1083]: E0520 14:13:14.386470    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume podName:73f0427c-c9c3-423c-8f9e-853d1499d1f4 nodeName:}" failed. No retries permitted until 2024-05-20 14:13:14.886432737 +0000 UTC m=+6.800447083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume") pod "coredns-6d4b75cb6d-kxc4t" (UID: "73f0427c-c9c3-423c-8f9e-853d1499d1f4") : object "kube-system"/"coredns" not registered
	May 20 14:13:14 test-preload-051001 kubelet[1083]: E0520 14:13:14.890225    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 14:13:14 test-preload-051001 kubelet[1083]: E0520 14:13:14.890350    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume podName:73f0427c-c9c3-423c-8f9e-853d1499d1f4 nodeName:}" failed. No retries permitted until 2024-05-20 14:13:15.890328951 +0000 UTC m=+7.804343297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume") pod "coredns-6d4b75cb6d-kxc4t" (UID: "73f0427c-c9c3-423c-8f9e-853d1499d1f4") : object "kube-system"/"coredns" not registered
	May 20 14:13:15 test-preload-051001 kubelet[1083]: E0520 14:13:15.896793    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 14:13:15 test-preload-051001 kubelet[1083]: E0520 14:13:15.896925    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume podName:73f0427c-c9c3-423c-8f9e-853d1499d1f4 nodeName:}" failed. No retries permitted until 2024-05-20 14:13:17.896908461 +0000 UTC m=+9.810922809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume") pod "coredns-6d4b75cb6d-kxc4t" (UID: "73f0427c-c9c3-423c-8f9e-853d1499d1f4") : object "kube-system"/"coredns" not registered
	May 20 14:13:16 test-preload-051001 kubelet[1083]: E0520 14:13:16.314382    1083 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-kxc4t" podUID=73f0427c-c9c3-423c-8f9e-853d1499d1f4
	May 20 14:13:16 test-preload-051001 kubelet[1083]: I0520 14:13:16.374551    1083 scope.go:110] "RemoveContainer" containerID="7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda"
	May 20 14:13:17 test-preload-051001 kubelet[1083]: E0520 14:13:17.914537    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 14:13:17 test-preload-051001 kubelet[1083]: E0520 14:13:17.914628    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume podName:73f0427c-c9c3-423c-8f9e-853d1499d1f4 nodeName:}" failed. No retries permitted until 2024-05-20 14:13:21.914613007 +0000 UTC m=+13.828627368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73f0427c-c9c3-423c-8f9e-853d1499d1f4-config-volume") pod "coredns-6d4b75cb6d-kxc4t" (UID: "73f0427c-c9c3-423c-8f9e-853d1499d1f4") : object "kube-system"/"coredns" not registered
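The kubelet errors above are the expected transient window during the node restart: the coredns config-volume mount is retried with exponential backoff (500ms, 1s, 2s, 4s) until the kube-system/coredns ConfigMap is registered in the kubelet's object cache, and the "No CNI configuration file in /etc/cni/net.d/" message clears once a CNI config is written. Both conditions can be checked directly; a sketch, assuming the same profile and context names:

    out/minikube-linux-amd64 -p test-preload-051001 ssh "ls /etc/cni/net.d/"
    kubectl --context test-preload-051001 -n kube-system get configmap coredns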
	
	
	==> storage-provisioner [7277d62e4bd756d0d1dd445904efbebb055c4c0d29935545ccc86d523520cfda] <==
	I0520 14:13:15.372557       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 14:13:15.375592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [9aaea3d71288be92fe6a988274638d5dd30925f5f9396d41cb040c737db894e2] <==
	I0520 14:13:16.546003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 14:13:16.560045       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 14:13:16.562884       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-051001 -n test-preload-051001
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-051001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-051001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-051001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-051001: (1.126653086s)
--- FAIL: TestPreload (175.81s)
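A sketch for reproducing just this failure locally; the package path and the need for extra driver/runtime flags are assumptions based on the usual minikube integration-test layout, not taken from this report:

    # run only TestPreload against a locally built out/minikube-linux-amd64
    go test -v -timeout 30m -run 'TestPreload$' ./test/integration/...
    # driver/runtime selection (kvm2 + crio, as used by this job) is passed via
    # additional test flags that are omitted here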

                                                
                                    
x
+
TestKubernetesUpgrade (453.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.738531109s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-366203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-366203" primary control-plane node in "kubernetes-upgrade-366203" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 14:16:28.312180  648893 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:16:28.313120  648893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:16:28.313131  648893 out.go:304] Setting ErrFile to fd 2...
	I0520 14:16:28.313136  648893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:16:28.313349  648893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:16:28.313924  648893 out.go:298] Setting JSON to false
	I0520 14:16:28.314861  648893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14328,"bootTime":1716200260,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:16:28.314927  648893 start.go:139] virtualization: kvm guest
	I0520 14:16:28.318220  648893 out.go:177] * [kubernetes-upgrade-366203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:16:28.320602  648893 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:16:28.323040  648893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:16:28.320650  648893 notify.go:220] Checking for updates...
	I0520 14:16:28.327319  648893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:16:28.329967  648893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:16:28.332215  648893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:16:28.334526  648893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:16:28.337277  648893 config.go:182] Loaded profile config "NoKubernetes-903699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:16:28.337433  648893 config.go:182] Loaded profile config "offline-crio-866828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:16:28.337532  648893 config.go:182] Loaded profile config "running-upgrade-016464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0520 14:16:28.337638  648893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:16:28.375807  648893 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 14:16:28.378218  648893 start.go:297] selected driver: kvm2
	I0520 14:16:28.378246  648893 start.go:901] validating driver "kvm2" against <nil>
	I0520 14:16:28.378260  648893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:16:28.379035  648893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:16:28.379158  648893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:16:28.396002  648893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:16:28.396081  648893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 14:16:28.396316  648893 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 14:16:28.396346  648893 cni.go:84] Creating CNI manager for ""
	I0520 14:16:28.396357  648893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:16:28.396368  648893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 14:16:28.396424  648893 start.go:340] cluster config:
	{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:16:28.396543  648893 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:16:28.399315  648893 out.go:177] * Starting "kubernetes-upgrade-366203" primary control-plane node in "kubernetes-upgrade-366203" cluster
	I0520 14:16:28.401534  648893 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 14:16:28.401588  648893 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 14:16:28.401606  648893 cache.go:56] Caching tarball of preloaded images
	I0520 14:16:28.401696  648893 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:16:28.401709  648893 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 14:16:28.401806  648893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/config.json ...
	I0520 14:16:28.401831  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/config.json: {Name:mk0a246615df3ea7005f2596bed4d3c0b6e441ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:16:28.401990  648893 start.go:360] acquireMachinesLock for kubernetes-upgrade-366203: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:16:54.825690  648893 start.go:364] duration metric: took 26.42365463s to acquireMachinesLock for "kubernetes-upgrade-366203"
	I0520 14:16:54.825844  648893 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:16:54.825959  648893 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 14:16:54.829010  648893 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 14:16:54.829218  648893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:16:54.829300  648893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:16:54.847244  648893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0520 14:16:54.847718  648893 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:16:54.848411  648893 main.go:141] libmachine: Using API Version  1
	I0520 14:16:54.848437  648893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:16:54.848837  648893 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:16:54.849091  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:16:54.849290  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:16:54.849453  648893 start.go:159] libmachine.API.Create for "kubernetes-upgrade-366203" (driver="kvm2")
	I0520 14:16:54.849494  648893 client.go:168] LocalClient.Create starting
	I0520 14:16:54.849527  648893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 14:16:54.849562  648893 main.go:141] libmachine: Decoding PEM data...
	I0520 14:16:54.849577  648893 main.go:141] libmachine: Parsing certificate...
	I0520 14:16:54.849629  648893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 14:16:54.849646  648893 main.go:141] libmachine: Decoding PEM data...
	I0520 14:16:54.849655  648893 main.go:141] libmachine: Parsing certificate...
	I0520 14:16:54.849668  648893 main.go:141] libmachine: Running pre-create checks...
	I0520 14:16:54.849677  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .PreCreateCheck
	I0520 14:16:54.850002  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetConfigRaw
	I0520 14:16:54.850494  648893 main.go:141] libmachine: Creating machine...
	I0520 14:16:54.850513  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Create
	I0520 14:16:54.850670  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Creating KVM machine...
	I0520 14:16:54.852175  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found existing default KVM network
	I0520 14:16:54.853525  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:54.853366  649366 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000254000}
	I0520 14:16:54.853559  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | created network xml: 
	I0520 14:16:54.853574  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | <network>
	I0520 14:16:54.853590  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   <name>mk-kubernetes-upgrade-366203</name>
	I0520 14:16:54.853602  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   <dns enable='no'/>
	I0520 14:16:54.853607  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   
	I0520 14:16:54.853615  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 14:16:54.853622  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |     <dhcp>
	I0520 14:16:54.853628  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 14:16:54.853635  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |     </dhcp>
	I0520 14:16:54.853643  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   </ip>
	I0520 14:16:54.853650  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG |   
	I0520 14:16:54.853655  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | </network>
	I0520 14:16:54.853662  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | 
	I0520 14:16:54.860116  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | trying to create private KVM network mk-kubernetes-upgrade-366203 192.168.39.0/24...
	I0520 14:16:54.937530  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | private KVM network mk-kubernetes-upgrade-366203 192.168.39.0/24 created
	I0520 14:16:54.937571  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:54.937464  649366 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:16:54.937585  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203 ...
	I0520 14:16:54.937604  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 14:16:54.937628  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 14:16:55.181781  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:55.181626  649366 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa...
	I0520 14:16:55.254616  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:55.254474  649366 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/kubernetes-upgrade-366203.rawdisk...
	I0520 14:16:55.254646  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Writing magic tar header
	I0520 14:16:55.254660  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Writing SSH key tar header
	I0520 14:16:55.254669  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:55.254617  649366 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203 ...
	I0520 14:16:55.254769  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203
	I0520 14:16:55.254796  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203 (perms=drwx------)
	I0520 14:16:55.254806  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 14:16:55.254817  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:16:55.254826  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 14:16:55.254833  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 14:16:55.254845  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home/jenkins
	I0520 14:16:55.254858  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Checking permissions on dir: /home
	I0520 14:16:55.254867  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Skipping /home - not owner
	I0520 14:16:55.254908  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 14:16:55.254935  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 14:16:55.254949  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 14:16:55.254960  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 14:16:55.254972  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 14:16:55.254983  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Creating domain...
	I0520 14:16:55.256126  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) define libvirt domain using xml: 
	I0520 14:16:55.256148  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) <domain type='kvm'>
	I0520 14:16:55.256164  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <name>kubernetes-upgrade-366203</name>
	I0520 14:16:55.256180  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <memory unit='MiB'>2200</memory>
	I0520 14:16:55.256189  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <vcpu>2</vcpu>
	I0520 14:16:55.256196  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <features>
	I0520 14:16:55.256204  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <acpi/>
	I0520 14:16:55.256211  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <apic/>
	I0520 14:16:55.256219  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <pae/>
	I0520 14:16:55.256232  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     
	I0520 14:16:55.256241  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   </features>
	I0520 14:16:55.256250  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <cpu mode='host-passthrough'>
	I0520 14:16:55.256270  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   
	I0520 14:16:55.256286  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   </cpu>
	I0520 14:16:55.256295  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <os>
	I0520 14:16:55.256302  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <type>hvm</type>
	I0520 14:16:55.256311  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <boot dev='cdrom'/>
	I0520 14:16:55.256317  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <boot dev='hd'/>
	I0520 14:16:55.256322  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <bootmenu enable='no'/>
	I0520 14:16:55.256329  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   </os>
	I0520 14:16:55.256335  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   <devices>
	I0520 14:16:55.256340  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <disk type='file' device='cdrom'>
	I0520 14:16:55.256356  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/boot2docker.iso'/>
	I0520 14:16:55.256372  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <target dev='hdc' bus='scsi'/>
	I0520 14:16:55.256384  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <readonly/>
	I0520 14:16:55.256391  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </disk>
	I0520 14:16:55.256404  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <disk type='file' device='disk'>
	I0520 14:16:55.256414  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 14:16:55.256423  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/kubernetes-upgrade-366203.rawdisk'/>
	I0520 14:16:55.256430  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <target dev='hda' bus='virtio'/>
	I0520 14:16:55.256436  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </disk>
	I0520 14:16:55.256444  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <interface type='network'>
	I0520 14:16:55.256453  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <source network='mk-kubernetes-upgrade-366203'/>
	I0520 14:16:55.256469  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <model type='virtio'/>
	I0520 14:16:55.256482  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </interface>
	I0520 14:16:55.256492  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <interface type='network'>
	I0520 14:16:55.256500  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <source network='default'/>
	I0520 14:16:55.256507  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <model type='virtio'/>
	I0520 14:16:55.256516  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </interface>
	I0520 14:16:55.256522  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <serial type='pty'>
	I0520 14:16:55.256528  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <target port='0'/>
	I0520 14:16:55.256540  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </serial>
	I0520 14:16:55.256553  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <console type='pty'>
	I0520 14:16:55.256566  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <target type='serial' port='0'/>
	I0520 14:16:55.256577  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </console>
	I0520 14:16:55.256588  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     <rng model='virtio'>
	I0520 14:16:55.256600  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)       <backend model='random'>/dev/random</backend>
	I0520 14:16:55.256606  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     </rng>
	I0520 14:16:55.256611  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     
	I0520 14:16:55.256616  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)     
	I0520 14:16:55.256621  648893 main.go:141] libmachine: (kubernetes-upgrade-366203)   </devices>
	I0520 14:16:55.256630  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) </domain>
	I0520 14:16:55.256644  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) 
	I0520 14:16:55.261855  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:6c:3c:29 in network default
	I0520 14:16:55.262459  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Ensuring networks are active...
	I0520 14:16:55.262481  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:55.263234  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Ensuring network default is active
	I0520 14:16:55.263601  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Ensuring network mk-kubernetes-upgrade-366203 is active
	I0520 14:16:55.264114  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Getting domain xml...
	I0520 14:16:55.264833  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Creating domain...
	I0520 14:16:56.500502  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Waiting to get IP...
	I0520 14:16:56.501527  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:56.502000  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:56.502027  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:56.501948  649366 retry.go:31] will retry after 250.626704ms: waiting for machine to come up
	I0520 14:16:56.754454  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:56.754950  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:56.754982  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:56.754901  649366 retry.go:31] will retry after 256.718764ms: waiting for machine to come up
	I0520 14:16:57.013490  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.013938  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.013964  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:57.013884  649366 retry.go:31] will retry after 303.592004ms: waiting for machine to come up
	I0520 14:16:57.319426  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.319896  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.319924  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:57.319843  649366 retry.go:31] will retry after 547.543441ms: waiting for machine to come up
	I0520 14:16:57.868480  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.868928  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:57.868953  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:57.868871  649366 retry.go:31] will retry after 666.277684ms: waiting for machine to come up
	I0520 14:16:58.536740  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:58.537290  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:58.537321  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:58.537225  649366 retry.go:31] will retry after 587.242639ms: waiting for machine to come up
	I0520 14:16:59.126215  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:16:59.126769  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:16:59.126806  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:16:59.126701  649366 retry.go:31] will retry after 1.05075509s: waiting for machine to come up
	I0520 14:17:00.179476  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:00.179997  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:00.180030  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:00.179973  649366 retry.go:31] will retry after 896.801465ms: waiting for machine to come up
	I0520 14:17:01.078332  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:01.079022  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:01.079059  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:01.078915  649366 retry.go:31] will retry after 1.144281567s: waiting for machine to come up
	I0520 14:17:02.225004  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:02.225669  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:02.225694  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:02.225592  649366 retry.go:31] will retry after 2.221318963s: waiting for machine to come up
	I0520 14:17:04.448791  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:04.449382  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:04.449413  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:04.449332  649366 retry.go:31] will retry after 2.359920074s: waiting for machine to come up
	I0520 14:17:06.811398  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:06.811912  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:06.811935  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:06.811858  649366 retry.go:31] will retry after 2.716260205s: waiting for machine to come up
	I0520 14:17:09.530568  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:09.530975  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:09.530995  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:09.530950  649366 retry.go:31] will retry after 2.797780859s: waiting for machine to come up
	I0520 14:17:12.332470  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:12.332967  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find current IP address of domain kubernetes-upgrade-366203 in network mk-kubernetes-upgrade-366203
	I0520 14:17:12.332994  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | I0520 14:17:12.332905  649366 retry.go:31] will retry after 3.869658775s: waiting for machine to come up
	I0520 14:17:16.205101  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.205632  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Found IP for machine: 192.168.39.196
	I0520 14:17:16.205662  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Reserving static IP address...
	I0520 14:17:16.205690  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has current primary IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.206143  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-366203", mac: "52:54:00:83:db:54", ip: "192.168.39.196"} in network mk-kubernetes-upgrade-366203
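
The repeated "will retry after …: waiting for machine to come up" lines above record minikube's retry helper polling libvirt for a DHCP lease on the domain's MAC address until 192.168.39.196 appears. A rough, hypothetical sketch of that poll-with-backoff pattern (not the actual retry.go or KVM driver code; lookupLeaseIP is a placeholder):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
// matching the domain's MAC address; it is a placeholder, not the real driver call.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls with a jittered, growing delay, mirroring the
// "will retry after ..." intervals printed in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff, capped below
		if delay > 4*time.Second {
			delay = 4 * time.Second
		}
	}
	return "", fmt.Errorf("machine %s did not obtain an IP within %s", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:83:db:54", 10*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
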
	I0520 14:17:16.297998  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Getting to WaitForSSH function...
	I0520 14:17:16.298049  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Reserved static IP address: 192.168.39.196
	I0520 14:17:16.298061  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Waiting for SSH to be available...
	I0520 14:17:16.301373  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.301780  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.301810  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.302027  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Using SSH client type: external
	I0520 14:17:16.302059  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Using SSH private key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa (-rw-------)
	I0520 14:17:16.302097  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 14:17:16.302108  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | About to run SSH command:
	I0520 14:17:16.302160  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | exit 0
	I0520 14:17:16.429307  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | SSH cmd err, output: <nil>: 
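
The "Using SSH client type: external" lines show the driver shelling out to /usr/bin/ssh with the options logged above and running `exit 0` until the guest's sshd answers. A minimal sketch of that reachability check using os/exec, with the host, user, and key path copied from the log (illustrative only, not minikube's sshutil code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive runs "exit 0" on the guest with the same options the log shows
// for the external SSH client; a zero exit status means sshd is reachable.
func sshAlive(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa"
	for i := 0; i < 10; i++ {
		if err := sshAlive("192.168.39.196", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
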
	I0520 14:17:16.429616  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) KVM machine creation complete!
	I0520 14:17:16.429994  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetConfigRaw
	I0520 14:17:16.430605  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:16.430888  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:16.431093  648893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 14:17:16.431112  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:17:16.432558  648893 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 14:17:16.432594  648893 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 14:17:16.432603  648893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 14:17:16.432615  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:16.435625  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.436117  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.436149  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.436327  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:16.436545  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.436731  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.436931  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:16.437117  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:16.437390  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:16.437406  648893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 14:17:16.561329  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 14:17:16.561387  648893 main.go:141] libmachine: Detecting the provisioner...
	I0520 14:17:16.561399  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:16.564631  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.565135  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.565176  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.565431  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:16.565685  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.565914  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.566094  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:16.566312  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:16.566500  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:16.566513  648893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 14:17:16.685990  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 14:17:16.686077  648893 main.go:141] libmachine: found compatible host: buildroot
	I0520 14:17:16.686084  648893 main.go:141] libmachine: Provisioning with buildroot...
	I0520 14:17:16.686092  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:17:16.686378  648893 buildroot.go:166] provisioning hostname "kubernetes-upgrade-366203"
	I0520 14:17:16.686410  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:17:16.686610  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:16.689380  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.689763  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.689798  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.689962  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:16.690161  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.690358  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.690515  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:16.690731  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:16.690946  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:16.690980  648893 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-366203 && echo "kubernetes-upgrade-366203" | sudo tee /etc/hostname
	I0520 14:17:16.819642  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-366203
	
	I0520 14:17:16.819678  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:16.823718  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.824323  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.824357  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.824587  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:16.824809  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.824999  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:16.825174  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:16.825372  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:16.825584  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:16.825603  648893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-366203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-366203/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-366203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:17:16.946666  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
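
The shell fragment above (grep/sed/tee on /etc/hosts) is what provisioning sends over SSH to pin the new hostname to 127.0.1.1. A small, hypothetical Go helper that composes the same snippet for an arbitrary hostname:

package main

import "fmt"

// hostsCommand reproduces the /etc/hosts fix-up shown in the log: if no line
// already ends with the hostname, either rewrite the 127.0.1.1 entry or append one.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	// The string printed here is what would be handed to the SSH runner.
	fmt.Println(hostsCommand("kubernetes-upgrade-366203"))
}
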
	I0520 14:17:16.946708  648893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:17:16.946736  648893 buildroot.go:174] setting up certificates
	I0520 14:17:16.946752  648893 provision.go:84] configureAuth start
	I0520 14:17:16.946772  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:17:16.947079  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:17:16.950151  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.950624  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.950655  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.950895  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:16.953120  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.953564  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:16.953597  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:16.953698  648893 provision.go:143] copyHostCerts
	I0520 14:17:16.953769  648893 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:17:16.953789  648893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:17:16.953845  648893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:17:16.953956  648893 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:17:16.953968  648893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:17:16.953990  648893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:17:16.954048  648893 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:17:16.954055  648893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:17:16.954071  648893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:17:16.954116  648893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-366203 san=[127.0.0.1 192.168.39.196 kubernetes-upgrade-366203 localhost minikube]
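
The provision step above generates a server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.39.196, the machine name, localhost, minikube). The hedged sketch below builds the same SAN set with crypto/x509; it self-signs for brevity instead of signing with the CA key, so it illustrates the certificate shape rather than the real provision.go flow:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs and org name copied from the provision log line above.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-366203"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-366203", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
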
	I0520 14:17:17.103745  648893 provision.go:177] copyRemoteCerts
	I0520 14:17:17.103810  648893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:17:17.103842  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.107168  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.107521  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.107555  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.107769  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.107998  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.108161  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.108334  648893 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:17:17.195305  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:17:17.218576  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 14:17:17.241442  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:17:17.264715  648893 provision.go:87] duration metric: took 317.942829ms to configureAuth
	I0520 14:17:17.264751  648893 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:17:17.264958  648893 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 14:17:17.265061  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.267723  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.268021  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.268060  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.268194  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.268440  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.268602  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.268791  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.268978  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:17.269182  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:17.269202  648893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:17:17.563689  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:17:17.563723  648893 main.go:141] libmachine: Checking connection to Docker...
	I0520 14:17:17.563735  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetURL
	I0520 14:17:17.565410  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Using libvirt version 6000000
	I0520 14:17:17.567949  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.568406  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.568442  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.568615  648893 main.go:141] libmachine: Docker is up and running!
	I0520 14:17:17.568637  648893 main.go:141] libmachine: Reticulating splines...
	I0520 14:17:17.568646  648893 client.go:171] duration metric: took 22.719142285s to LocalClient.Create
	I0520 14:17:17.568681  648893 start.go:167] duration metric: took 22.719228946s to libmachine.API.Create "kubernetes-upgrade-366203"
	I0520 14:17:17.568692  648893 start.go:293] postStartSetup for "kubernetes-upgrade-366203" (driver="kvm2")
	I0520 14:17:17.568702  648893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:17:17.568721  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:17.569041  648893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:17:17.569076  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.571597  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.571995  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.572030  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.572189  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.572390  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.572583  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.572743  648893 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:17:17.665692  648893 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:17:17.670308  648893 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:17:17.670338  648893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:17:17.670414  648893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:17:17.670495  648893 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:17:17.670611  648893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:17:17.681138  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:17:17.706564  648893 start.go:296] duration metric: took 137.856562ms for postStartSetup
	I0520 14:17:17.706624  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetConfigRaw
	I0520 14:17:17.707299  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:17:17.710512  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.710959  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.710997  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.711278  648893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/config.json ...
	I0520 14:17:17.711492  648893 start.go:128] duration metric: took 22.885518575s to createHost
	I0520 14:17:17.711521  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.714130  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.714556  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.714594  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.714824  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.715051  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.715257  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.715481  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.715679  648893 main.go:141] libmachine: Using SSH client type: native
	I0520 14:17:17.715895  648893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:17:17.715916  648893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 14:17:17.834068  648893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716214637.792567544
	
	I0520 14:17:17.834096  648893 fix.go:216] guest clock: 1716214637.792567544
	I0520 14:17:17.834106  648893 fix.go:229] Guest: 2024-05-20 14:17:17.792567544 +0000 UTC Remote: 2024-05-20 14:17:17.711505748 +0000 UTC m=+49.438683830 (delta=81.061796ms)
	I0520 14:17:17.834134  648893 fix.go:200] guest clock delta is within tolerance: 81.061796ms
	I0520 14:17:17.834142  648893 start.go:83] releasing machines lock for "kubernetes-upgrade-366203", held for 23.008357267s
	I0520 14:17:17.834174  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:17.834502  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:17:17.837976  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.838470  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.838506  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.838696  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:17.839418  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:17.839641  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:17:17.839737  648893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:17:17.839800  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.840123  648893 ssh_runner.go:195] Run: cat /version.json
	I0520 14:17:17.840176  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:17:17.843013  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.843317  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.843385  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.843500  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.843666  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.843785  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:17.843813  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:17.844060  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:17:17.844075  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.844283  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:17:17.844288  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.844576  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:17:17.844570  648893 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:17:17.844768  648893 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	W0520 14:17:17.926635  648893 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:17:17.926744  648893 ssh_runner.go:195] Run: systemctl --version
	I0520 14:17:17.971290  648893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:17:18.139638  648893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 14:17:18.146178  648893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:17:18.146253  648893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:17:18.161910  648893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 14:17:18.161948  648893 start.go:494] detecting cgroup driver to use...
	I0520 14:17:18.162034  648893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:17:18.186037  648893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:17:18.202729  648893 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:17:18.202796  648893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:17:18.218535  648893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:17:18.232489  648893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:17:18.350526  648893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:17:18.521140  648893 docker.go:233] disabling docker service ...
	I0520 14:17:18.521215  648893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:17:18.537017  648893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:17:18.552684  648893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:17:18.691671  648893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:17:18.830804  648893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
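
Before configuring CRI-O, minikube stops, disables, and masks the cri-docker and docker units (the Run: lines from 14:17:18.202 through 14:17:18.830 above). A hypothetical condensed version of that systemctl sequence, assumed to run as root on the guest:

package main

import (
	"fmt"
	"os/exec"
)

// systemctl runs one systemctl action, tolerating failures the same way the
// log tolerates missing cri-docker units.
func systemctl(args ...string) {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	if err := cmd.Run(); err != nil {
		fmt.Println("ignored:", args, err)
	}
}

func main() {
	// Mirror the sequence in the log: stop, disable and mask the docker and
	// cri-docker units so CRI-O is the only container runtime left active.
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		systemctl("stop", "-f", unit)
	}
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
	// Final check, written exactly as the log invokes it.
	systemctl("is-active", "--quiet", "service", "docker")
}
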
	I0520 14:17:18.848264  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:17:18.868810  648893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 14:17:18.868891  648893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:17:18.879672  648893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:17:18.879752  648893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:17:18.892844  648893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:17:18.903677  648893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:17:18.914601  648893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:17:18.925187  648893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:17:18.934852  648893 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 14:17:18.934927  648893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 14:17:18.949200  648893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:17:18.959112  648893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:17:19.115774  648893 ssh_runner.go:195] Run: sudo systemctl restart crio
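
The run of ssh_runner commands from 14:17:18.848 to 14:17:19.115 prepares CRI-O: point crictl at the CRI-O socket, set the pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, drop the bundled CNI config, enable bridge netfilter and IPv4 forwarding, then reload systemd and restart crio. A hedged sketch that replays the same commands over the external ssh client (command text copied from the log; the crictl.yaml write is simplified to echo):

package main

import (
	"fmt"
	"os/exec"
)

// crioPrepCommands mirrors the sequence of Run: lines in the log that
// configure CRI-O before kubeadm runs. Paths and values are copied from the log.
var crioPrepCommands = []string{
	`echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo rm -rf /etc/cni/net.mk`,
	`sudo modprobe br_netfilter`,
	`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

// runOverSSH shells out to ssh and executes one command on the guest,
// using the key path shown earlier in the log.
func runOverSSH(ip, keyPath, cmd string) error {
	return exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no", "docker@"+ip, cmd).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa"
	for _, c := range crioPrepCommands {
		if err := runOverSSH("192.168.39.196", key, c); err != nil {
			fmt.Println("command failed:", c, err)
			return
		}
	}
	fmt.Println("CRI-O configured and restarted")
}
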
	I0520 14:17:19.276373  648893 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:17:19.276465  648893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:17:19.281669  648893 start.go:562] Will wait 60s for crictl version
	I0520 14:17:19.281742  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:19.286281  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:17:19.329688  648893 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 14:17:19.329772  648893 ssh_runner.go:195] Run: crio --version
	I0520 14:17:19.358196  648893 ssh_runner.go:195] Run: crio --version
	I0520 14:17:19.392235  648893 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 14:17:19.394583  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:17:19.399802  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:19.400556  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:17:09 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:17:19.400590  648893 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:17:19.400846  648893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 14:17:19.405349  648893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 14:17:19.418996  648893 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0520 14:17:19.419144  648893 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 14:17:19.419207  648893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:17:19.456319  648893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 14:17:19.456392  648893 ssh_runner.go:195] Run: which lz4
	I0520 14:17:19.460619  648893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 14:17:19.466188  648893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 14:17:19.466228  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 14:17:21.143793  648893 crio.go:462] duration metric: took 1.683220545s to copy over tarball
	I0520 14:17:21.143886  648893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 14:17:24.011709  648893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.867779957s)
	I0520 14:17:24.011749  648893 crio.go:469] duration metric: took 2.867923837s to extract the tarball
	I0520 14:17:24.011762  648893 ssh_runner.go:146] rm: /preloaded.tar.lz4
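
The preload step scp's the ~473 MB preloaded-images tarball to /preloaded.tar.lz4 in the guest, unpacks it into /var with lz4, and then removes it. A minimal sketch of the extract-and-clean-up half, assuming the tarball has already been copied to the machine and lz4 is installed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unpack the preloaded image tarball the same way the log shows, then delete it.
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := extract.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}
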
	I0520 14:17:24.055011  648893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:17:24.105425  648893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 14:17:24.105457  648893 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 14:17:24.105539  648893 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:17:24.105602  648893 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 14:17:24.105610  648893 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 14:17:24.105645  648893 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 14:17:24.105566  648893 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 14:17:24.105602  648893 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 14:17:24.105580  648893 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 14:17:24.105559  648893 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 14:17:24.107011  648893 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 14:17:24.107056  648893 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 14:17:24.107089  648893 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 14:17:24.107101  648893 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 14:17:24.107142  648893 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 14:17:24.107011  648893 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 14:17:24.107236  648893 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:17:24.107721  648893 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 14:17:24.350388  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 14:17:24.361692  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 14:17:24.367683  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 14:17:24.368317  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 14:17:24.376609  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 14:17:24.384209  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 14:17:24.428418  648893 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 14:17:24.428474  648893 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 14:17:24.428527  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.448799  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 14:17:24.513698  648893 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 14:17:24.513752  648893 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 14:17:24.513772  648893 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 14:17:24.513802  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.513826  648893 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 14:17:24.513874  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.513968  648893 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 14:17:24.513998  648893 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 14:17:24.514047  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.530927  648893 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 14:17:24.530973  648893 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 14:17:24.530995  648893 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 14:17:24.531027  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.531034  648893 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 14:17:24.531081  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.531085  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 14:17:24.555798  648893 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 14:17:24.555854  648893 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 14:17:24.555893  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 14:17:24.555902  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 14:17:24.555895  648893 ssh_runner.go:195] Run: which crictl
	I0520 14:17:24.555962  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 14:17:24.596342  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 14:17:24.596539  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 14:17:24.596563  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 14:17:24.635148  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 14:17:24.635331  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 14:17:24.661608  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 14:17:24.661672  648893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 14:17:24.686819  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 14:17:24.686915  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 14:17:24.708359  648893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 14:17:24.986024  648893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:17:25.124773  648893 cache_images.go:92] duration metric: took 1.019295768s to LoadCachedImages
	W0520 14:17:25.124877  648893 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18929-602525/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0520 14:17:25.124896  648893 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.20.0 crio true true} ...
	I0520 14:17:25.125042  648893 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-366203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
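
For context, the ExecStart line in the drop-in above is rendered from the node's config (version, node name, node IP, CRI socket). A minimal sketch of that kind of rendering with Go's text/template is shown below; the nodeConfig struct and template string here are assumptions for illustration, not minikube's actual types or code.

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds just the fields needed for the ExecStart line shown above.
// Hypothetical struct for illustration only.
type nodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

const execStartTmpl = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
`

func main() {
	cfg := nodeConfig{
		KubernetesVersion: "v1.20.0",
		NodeName:          "kubernetes-upgrade-366203",
		NodeIP:            "192.168.39.196",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	// Render the flag line to stdout; on the node the equivalent content ends up
	// in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	tmpl := template.Must(template.New("execstart").Parse(execStartTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
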
	I0520 14:17:25.125131  648893 ssh_runner.go:195] Run: crio config
	I0520 14:17:25.179272  648893 cni.go:84] Creating CNI manager for ""
	I0520 14:17:25.179296  648893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:17:25.179316  648893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:17:25.179338  648893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-366203 NodeName:kubernetes-upgrade-366203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 14:17:25.179471  648893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-366203"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 14:17:25.179536  648893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 14:17:25.189222  648893 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:17:25.189325  648893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:17:25.198300  648893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0520 14:17:25.215114  648893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:17:25.231132  648893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0520 14:17:25.247199  648893 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0520 14:17:25.250874  648893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 14:17:25.262360  648893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:17:25.378239  648893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:17:25.395051  648893 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203 for IP: 192.168.39.196
	I0520 14:17:25.395083  648893 certs.go:194] generating shared ca certs ...
	I0520 14:17:25.395107  648893 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:25.395313  648893 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:17:25.395371  648893 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:17:25.395382  648893 certs.go:256] generating profile certs ...
	I0520 14:17:25.395453  648893 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key
	I0520 14:17:25.395473  648893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.crt with IP's: []
	I0520 14:17:25.661258  648893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.crt ...
	I0520 14:17:25.661302  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.crt: {Name:mk3f189547849ae1c4932cfd566ac16907a1874a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:25.661480  648893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key ...
	I0520 14:17:25.661501  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key: {Name:mk87723f6165de4dddd4cdc4bfce86e32bc1be92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:25.661616  648893 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key.8da18aa8
	I0520 14:17:25.661634  648893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt.8da18aa8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196]
	I0520 14:17:25.899434  648893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt.8da18aa8 ...
	I0520 14:17:25.899481  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt.8da18aa8: {Name:mka09a31f76ee228537071599c31a9d9ccdca052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:25.899675  648893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key.8da18aa8 ...
	I0520 14:17:25.899693  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key.8da18aa8: {Name:mk5aaffbf85c8574ef3bcc101e8e7809d388ca0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:25.899786  648893 certs.go:381] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt.8da18aa8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt
	I0520 14:17:25.899885  648893 certs.go:385] copying /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key.8da18aa8 -> /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key
	I0520 14:17:25.899969  648893 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key
	I0520 14:17:25.899991  648893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.crt with IP's: []
	I0520 14:17:26.047147  648893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.crt ...
	I0520 14:17:26.047185  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.crt: {Name:mk34a297e944c18089e39a92bca8fee94bd4d477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:26.114215  648893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key ...
	I0520 14:17:26.114273  648893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key: {Name:mk8d64deec38136b8a1d1718321506e47f20ed93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:17:26.114646  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:17:26.114715  648893 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:17:26.114739  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:17:26.114786  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:17:26.114830  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:17:26.114879  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:17:26.114981  648893 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:17:26.115988  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:17:26.160452  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:17:26.188928  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:17:26.213626  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:17:26.237448  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 14:17:26.264298  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 14:17:26.340880  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:17:26.367690  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:17:26.393347  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:17:26.414946  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:17:26.453402  648893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:17:26.485467  648893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:17:26.506448  648893 ssh_runner.go:195] Run: openssl version
	I0520 14:17:26.512512  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:17:26.526619  648893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:17:26.531169  648893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:17:26.531237  648893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:17:26.537142  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 14:17:26.548103  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:17:26.563008  648893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:17:26.568941  648893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:17:26.569007  648893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:17:26.577282  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:17:26.592258  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:17:26.604055  648893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:17:26.608687  648893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:17:26.608762  648893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:17:26.615674  648893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 14:17:26.630096  648893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:17:26.634647  648893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 14:17:26.634764  648893 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:17:26.634891  648893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:17:26.634956  648893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:17:26.673429  648893 cri.go:89] found id: ""
	I0520 14:17:26.673523  648893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 14:17:26.684052  648893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 14:17:26.693893  648893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 14:17:26.703030  648893 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 14:17:26.703054  648893 kubeadm.go:156] found existing configuration files:
	
	I0520 14:17:26.703113  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 14:17:26.711714  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 14:17:26.711783  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 14:17:26.725603  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 14:17:26.738851  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 14:17:26.738922  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 14:17:26.748816  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 14:17:26.758232  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 14:17:26.758320  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 14:17:26.768075  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 14:17:26.777849  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 14:17:26.777920  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 14:17:26.788512  648893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 14:17:27.109773  648893 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 14:19:24.959425  648893 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 14:19:24.959582  648893 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 14:19:24.961159  648893 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 14:19:24.961231  648893 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 14:19:24.961339  648893 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 14:19:24.961466  648893 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 14:19:24.961596  648893 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 14:19:24.961679  648893 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 14:19:24.964497  648893 out.go:204]   - Generating certificates and keys ...
	I0520 14:19:24.964609  648893 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 14:19:24.964706  648893 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 14:19:24.964815  648893 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 14:19:24.964892  648893 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 14:19:24.964962  648893 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 14:19:24.965024  648893 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 14:19:24.965139  648893 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 14:19:24.965370  648893 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0520 14:19:24.965455  648893 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 14:19:24.965649  648893 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0520 14:19:24.965753  648893 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 14:19:24.965810  648893 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 14:19:24.965875  648893 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 14:19:24.965955  648893 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 14:19:24.966061  648893 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 14:19:24.966131  648893 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 14:19:24.966232  648893 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 14:19:24.966298  648893 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 14:19:24.966455  648893 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 14:19:24.966595  648893 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 14:19:24.966664  648893 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 14:19:24.966767  648893 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 14:19:24.969451  648893 out.go:204]   - Booting up control plane ...
	I0520 14:19:24.969581  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 14:19:24.969704  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 14:19:24.969816  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 14:19:24.969948  648893 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 14:19:24.970189  648893 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 14:19:24.970267  648893 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 14:19:24.970367  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:19:24.970623  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:19:24.970729  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:19:24.970998  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:19:24.971098  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:19:24.971360  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:19:24.971467  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:19:24.971749  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:19:24.971854  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:19:24.972112  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:19:24.972140  648893 kubeadm.go:309] 
	I0520 14:19:24.972206  648893 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 14:19:24.972273  648893 kubeadm.go:309] 		timed out waiting for the condition
	I0520 14:19:24.972284  648893 kubeadm.go:309] 
	I0520 14:19:24.972342  648893 kubeadm.go:309] 	This error is likely caused by:
	I0520 14:19:24.972388  648893 kubeadm.go:309] 		- The kubelet is not running
	I0520 14:19:24.972530  648893 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 14:19:24.972541  648893 kubeadm.go:309] 
	I0520 14:19:24.972690  648893 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 14:19:24.972728  648893 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 14:19:24.972778  648893 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 14:19:24.972789  648893 kubeadm.go:309] 
	I0520 14:19:24.972933  648893 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 14:19:24.973053  648893 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 14:19:24.973071  648893 kubeadm.go:309] 
	I0520 14:19:24.973205  648893 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 14:19:24.973366  648893 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 14:19:24.973481  648893 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 14:19:24.973585  648893 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 14:19:24.973636  648893 kubeadm.go:309] 
	W0520 14:19:24.973788  648893 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-366203 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
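
The repeated [kubelet-check] failures above are plain HTTP probes of the kubelet healthz endpoint on 127.0.0.1:10248, which keep getting "connection refused" because the kubelet never comes up. A minimal standalone probe that mirrors that check, using only the Go standard library (an illustrative sketch, not kubeadm's implementation), would be:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(40 * time.Second) // mirrors the "Initial timeout of 40s" in the log
	client := &http.Client{Timeout: 5 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" here corresponds to the failures above:
			// nothing is listening on 127.0.0.1:10248, i.e. the kubelet is not running.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
		return
	}
	fmt.Println("timed out waiting for kubelet healthz")
}
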
	
	I0520 14:19:24.973850  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 14:19:26.189028  648893 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.215139276s)
	I0520 14:19:26.189142  648893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 14:19:26.203496  648893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 14:19:26.217136  648893 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 14:19:26.217175  648893 kubeadm.go:156] found existing configuration files:
	
	I0520 14:19:26.217265  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 14:19:26.228317  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 14:19:26.228394  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 14:19:26.240232  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 14:19:26.253584  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 14:19:26.253673  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 14:19:26.267935  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 14:19:26.280532  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 14:19:26.280604  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 14:19:26.291891  648893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 14:19:26.304619  648893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 14:19:26.304693  648893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 14:19:26.317532  648893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 14:19:26.396346  648893 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 14:19:26.396507  648893 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 14:19:26.572287  648893 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 14:19:26.572435  648893 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 14:19:26.572567  648893 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 14:19:26.800582  648893 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 14:19:26.863675  648893 out.go:204]   - Generating certificates and keys ...
	I0520 14:19:26.863828  648893 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 14:19:26.863957  648893 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 14:19:26.864073  648893 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 14:19:26.864152  648893 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 14:19:26.864229  648893 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 14:19:26.864303  648893 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 14:19:26.864402  648893 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 14:19:26.864486  648893 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 14:19:26.864592  648893 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 14:19:26.864728  648893 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 14:19:26.864793  648893 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 14:19:26.864881  648893 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 14:19:26.975620  648893 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 14:19:27.567542  648893 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 14:19:27.860411  648893 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 14:19:28.135149  648893 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 14:19:28.152034  648893 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 14:19:28.153680  648893 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 14:19:28.153768  648893 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 14:19:28.288058  648893 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 14:19:28.290725  648893 out.go:204]   - Booting up control plane ...
	I0520 14:19:28.290868  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 14:19:28.301699  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 14:19:28.303088  648893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 14:19:28.304251  648893 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 14:19:28.307541  648893 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 14:20:08.308958  648893 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 14:20:08.309235  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:20:08.309516  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:20:13.310021  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:20:13.310234  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:20:23.310740  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:20:23.310916  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:20:43.311961  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:20:43.312171  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:21:23.312121  648893 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 14:21:23.312530  648893 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 14:21:23.315080  648893 kubeadm.go:309] 
	I0520 14:21:23.315120  648893 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 14:21:23.315168  648893 kubeadm.go:309] 		timed out waiting for the condition
	I0520 14:21:23.315182  648893 kubeadm.go:309] 
	I0520 14:21:23.315230  648893 kubeadm.go:309] 	This error is likely caused by:
	I0520 14:21:23.315277  648893 kubeadm.go:309] 		- The kubelet is not running
	I0520 14:21:23.315414  648893 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 14:21:23.315428  648893 kubeadm.go:309] 
	I0520 14:21:23.315559  648893 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 14:21:23.315597  648893 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 14:21:23.315656  648893 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 14:21:23.315684  648893 kubeadm.go:309] 
	I0520 14:21:23.315845  648893 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 14:21:23.315949  648893 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 14:21:23.315964  648893 kubeadm.go:309] 
	I0520 14:21:23.316138  648893 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 14:21:23.316255  648893 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 14:21:23.316391  648893 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 14:21:23.316539  648893 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 14:21:23.316606  648893 kubeadm.go:309] 
	I0520 14:21:23.316754  648893 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 14:21:23.316863  648893 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 14:21:23.317039  648893 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 14:21:23.317064  648893 kubeadm.go:393] duration metric: took 3m56.682306311s to StartCluster
	I0520 14:21:23.317122  648893 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 14:21:23.317200  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 14:21:23.364120  648893 cri.go:89] found id: ""
	I0520 14:21:23.364160  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.364170  648893 logs.go:278] No container was found matching "kube-apiserver"
	I0520 14:21:23.364176  648893 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 14:21:23.364235  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 14:21:23.408517  648893 cri.go:89] found id: ""
	I0520 14:21:23.408547  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.408555  648893 logs.go:278] No container was found matching "etcd"
	I0520 14:21:23.408563  648893 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 14:21:23.408640  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 14:21:23.451000  648893 cri.go:89] found id: ""
	I0520 14:21:23.451031  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.451039  648893 logs.go:278] No container was found matching "coredns"
	I0520 14:21:23.451046  648893 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 14:21:23.451112  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 14:21:23.487281  648893 cri.go:89] found id: ""
	I0520 14:21:23.487314  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.487326  648893 logs.go:278] No container was found matching "kube-scheduler"
	I0520 14:21:23.487335  648893 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 14:21:23.487410  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 14:21:23.523701  648893 cri.go:89] found id: ""
	I0520 14:21:23.523728  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.523736  648893 logs.go:278] No container was found matching "kube-proxy"
	I0520 14:21:23.523742  648893 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 14:21:23.523794  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 14:21:23.558276  648893 cri.go:89] found id: ""
	I0520 14:21:23.558311  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.558323  648893 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 14:21:23.558332  648893 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 14:21:23.558390  648893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 14:21:23.594222  648893 cri.go:89] found id: ""
	I0520 14:21:23.594249  648893 logs.go:276] 0 containers: []
	W0520 14:21:23.594266  648893 logs.go:278] No container was found matching "kindnet"
	I0520 14:21:23.594279  648893 logs.go:123] Gathering logs for kubelet ...
	I0520 14:21:23.594295  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 14:21:23.689423  648893 logs.go:123] Gathering logs for dmesg ...
	I0520 14:21:23.689473  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 14:21:23.704088  648893 logs.go:123] Gathering logs for describe nodes ...
	I0520 14:21:23.704120  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 14:21:23.826525  648893 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 14:21:23.826549  648893 logs.go:123] Gathering logs for CRI-O ...
	I0520 14:21:23.826562  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 14:21:23.946770  648893 logs.go:123] Gathering logs for container status ...
	I0520 14:21:23.946813  648893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 14:21:23.990694  648893 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 14:21:23.990749  648893 out.go:239] * 
	W0520 14:21:23.990831  648893 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 14:21:23.990868  648893 out.go:239] * 
	W0520 14:21:23.991725  648893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 14:21:23.995999  648893 out.go:177] 
	W0520 14:21:23.998017  648893 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 14:21:23.998083  648893 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 14:21:23.998107  648893 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 14:21:24.000350  648893 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
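For reference, a minimal shell sketch of the troubleshooting path that the log above suggests for this K8S_KUBELET_NOT_RUNNING failure. The commands are taken from the kubeadm output and from minikube's own suggestion (profile name and start flags copied from this run); they only illustrate the flow and are not a verified fix for this job:

	# Inspect the kubelet on the failing node (commands suggested by kubeadm above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-366203 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-366203 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List any control-plane containers CRI-O did manage to start
	out/minikube-linux-amd64 -p kubernetes-upgrade-366203 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the kubelet cgroup driver override that minikube suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd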
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-366203
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-366203: (6.347937275s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-366203 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-366203 status --format={{.Host}}: exit status 7 (69.775269ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.252832563s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-366203 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.827802ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-366203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-366203
	    minikube start -p kubernetes-upgrade-366203 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3662032 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-366203 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
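As a concrete illustration of the recovery options printed in the stderr above (commands reproduced from minikube's own suggestion for this profile; shown here for readability, not validated against this job):

	# Option 1: recreate the profile at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-366203
	minikube start -p kubernetes-upgrade-366203 --kubernetes-version=v1.20.0
	# Option 2: start a second profile at the older version alongside the existing one
	minikube start -p kubernetes-upgrade-3662032 --kubernetes-version=v1.20.0
	# Option 3: keep the existing cluster at v1.30.1 (the path this test takes next)
	minikube start -p kubernetes-upgrade-366203 --kubernetes-version=v1.30.1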
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-366203 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m44.30251375s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-20 14:23:58.199086149 +0000 UTC m=+5384.581240371
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-366203 -n kubernetes-upgrade-366203
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-366203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-366203 logs -n 25: (1.346471617s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-862860 sudo crictl                        | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo crictl                        | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo find                          | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo ip a s                        | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	| ssh     | -p kindnet-862860 sudo ip r s                        | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | iptables-save                                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | iptables -t nat -L -n -v                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo docker                        | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo                               | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC | 20 May 24 14:23 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-862860 sudo cat                           | kindnet-862860 | jenkins | v1.33.1 | 20 May 24 14:23 UTC |                     |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:23:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 14:23:35.974385  658816 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:23:35.974670  658816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:23:35.974682  658816 out.go:304] Setting ErrFile to fd 2...
	I0520 14:23:35.974687  658816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:23:35.974930  658816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:23:35.975570  658816 out.go:298] Setting JSON to false
	I0520 14:23:35.976673  658816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14756,"bootTime":1716200260,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:23:35.976762  658816 start.go:139] virtualization: kvm guest
	I0520 14:23:35.980014  658816 out.go:177] * [custom-flannel-862860] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:23:35.982523  658816 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:23:35.982557  658816 notify.go:220] Checking for updates...
	I0520 14:23:35.984948  658816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:23:35.987468  658816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:23:35.989786  658816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:23:35.992062  658816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:23:35.994352  658816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:23:35.997062  658816 config.go:182] Loaded profile config "calico-862860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:23:35.997193  658816 config.go:182] Loaded profile config "kindnet-862860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:23:35.997404  658816 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:23:35.997553  658816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:23:36.036341  658816 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 14:23:36.038987  658816 start.go:297] selected driver: kvm2
	I0520 14:23:36.039024  658816 start.go:901] validating driver "kvm2" against <nil>
	I0520 14:23:36.039044  658816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:23:36.040254  658816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:23:36.040352  658816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:23:36.057440  658816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:23:36.057510  658816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 14:23:36.057776  658816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:23:36.057805  658816 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 14:23:36.057823  658816 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 14:23:36.057877  658816 start.go:340] cluster config:
	{Name:custom-flannel-862860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-862860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:23:36.057969  658816 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:23:36.060841  658816 out.go:177] * Starting "custom-flannel-862860" primary control-plane node in "custom-flannel-862860" cluster
	I0520 14:23:36.063216  658816 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:23:36.063279  658816 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:23:36.063292  658816 cache.go:56] Caching tarball of preloaded images
	I0520 14:23:36.063433  658816 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:23:36.063451  658816 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 14:23:36.063567  658816 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/custom-flannel-862860/config.json ...
	I0520 14:23:36.063601  658816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/custom-flannel-862860/config.json: {Name:mk15b8488288c6eceb5da40fb4fd9cae5ba634d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:36.063808  658816 start.go:360] acquireMachinesLock for custom-flannel-862860: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:23:36.063863  658816 start.go:364] duration metric: took 35.509µs to acquireMachinesLock for "custom-flannel-862860"
	I0520 14:23:36.063896  658816 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-862860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-862860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:23:36.064002  658816 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 14:23:34.939897  658162 out.go:204]   - Booting up control plane ...
	I0520 14:23:34.940063  658162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 14:23:34.940193  658162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 14:23:34.940297  658162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 14:23:34.961857  658162 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 14:23:34.964392  658162 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 14:23:34.964463  658162 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 14:23:35.131491  658162 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 14:23:35.131620  658162 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 14:23:36.132900  658162 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001960578s
	I0520 14:23:36.133024  658162 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 14:23:36.066590  658816 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 14:23:36.066794  658816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:36.066861  658816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:36.083713  658816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0520 14:23:36.084336  658816 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:36.085004  658816 main.go:141] libmachine: Using API Version  1
	I0520 14:23:36.085032  658816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:36.085494  658816 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:36.085725  658816 main.go:141] libmachine: (custom-flannel-862860) Calling .GetMachineName
	I0520 14:23:36.085946  658816 main.go:141] libmachine: (custom-flannel-862860) Calling .DriverName
	I0520 14:23:36.086166  658816 start.go:159] libmachine.API.Create for "custom-flannel-862860" (driver="kvm2")
	I0520 14:23:36.086203  658816 client.go:168] LocalClient.Create starting
	I0520 14:23:36.086243  658816 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem
	I0520 14:23:36.086279  658816 main.go:141] libmachine: Decoding PEM data...
	I0520 14:23:36.086297  658816 main.go:141] libmachine: Parsing certificate...
	I0520 14:23:36.086364  658816 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem
	I0520 14:23:36.086392  658816 main.go:141] libmachine: Decoding PEM data...
	I0520 14:23:36.086410  658816 main.go:141] libmachine: Parsing certificate...
	I0520 14:23:36.086433  658816 main.go:141] libmachine: Running pre-create checks...
	I0520 14:23:36.086449  658816 main.go:141] libmachine: (custom-flannel-862860) Calling .PreCreateCheck
	I0520 14:23:36.086837  658816 main.go:141] libmachine: (custom-flannel-862860) Calling .GetConfigRaw
	I0520 14:23:36.087323  658816 main.go:141] libmachine: Creating machine...
	I0520 14:23:36.087345  658816 main.go:141] libmachine: (custom-flannel-862860) Calling .Create
	I0520 14:23:36.087504  658816 main.go:141] libmachine: (custom-flannel-862860) Creating KVM machine...
	I0520 14:23:36.089218  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | found existing default KVM network
	I0520 14:23:36.090966  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.090773  658839 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:7d:66} reservation:<nil>}
	I0520 14:23:36.092329  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.092221  658839 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:76:af} reservation:<nil>}
	I0520 14:23:36.093785  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.093690  658839 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d9:a3:31} reservation:<nil>}
	I0520 14:23:36.095444  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.095358  658839 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00032bb60}
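[editor's note] The three "skipping subnet" lines above show the kvm2 driver walking candidate private /24 networks and taking the first one no existing libvirt network claims (here 192.168.72.0/24). A minimal sketch of that idea, not minikube's actual implementation, with the taken subnets hard-coded for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks candidate 192.168.X.0/24 subnets and returns the
    // first one that is not already claimed by another network.
    func firstFreeSubnet(taken map[string]bool) *net.IPNet {
        // Candidate third octets loosely mirroring the ranges seen in the log.
        for _, octet := range []int{39, 50, 61, 72, 83, 94} {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[cidr] {
                continue // subnet already in use by another libvirt network
            }
            _, ipnet, err := net.ParseCIDR(cidr)
            if err != nil {
                continue
            }
            return ipnet
        }
        return nil
    }

    func main() {
        taken := map[string]bool{
            "192.168.39.0/24": true,
            "192.168.50.0/24": true,
            "192.168.61.0/24": true,
        }
        if free := firstFreeSubnet(taken); free != nil {
            fmt.Println("using free private subnet", free) // prints 192.168.72.0/24
        }
    }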
	I0520 14:23:36.095544  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | created network xml: 
	I0520 14:23:36.095575  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | <network>
	I0520 14:23:36.095588  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   <name>mk-custom-flannel-862860</name>
	I0520 14:23:36.095602  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   <dns enable='no'/>
	I0520 14:23:36.095618  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   
	I0520 14:23:36.095628  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0520 14:23:36.095640  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |     <dhcp>
	I0520 14:23:36.095650  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0520 14:23:36.095658  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |     </dhcp>
	I0520 14:23:36.095670  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   </ip>
	I0520 14:23:36.095696  658816 main.go:141] libmachine: (custom-flannel-862860) DBG |   
	I0520 14:23:36.095724  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | </network>
	I0520 14:23:36.095761  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | 
	I0520 14:23:36.102284  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | trying to create private KVM network mk-custom-flannel-862860 192.168.72.0/24...
	I0520 14:23:36.212496  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | private KVM network mk-custom-flannel-862860 192.168.72.0/24 created
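[editor's note] The driver registers the network XML printed above through the libvirt API. A rough command-line equivalent, sketched with os/exec and virsh; this assumes virsh is on PATH and the XML has been saved to net.xml, and it is not the code path minikube itself uses:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // defineAndStartNetwork registers a libvirt network from an XML file and
    // brings it up, roughly mirroring what the kvm2 driver does via libvirt.
    func defineAndStartNetwork(xmlPath, name string) error {
        steps := [][]string{
            {"net-define", xmlPath}, // register the network definition
            {"net-start", name},     // activate it (creates the virbrN bridge)
            {"net-autostart", name}, // optional: bring it up on host boot
        }
        for _, args := range steps {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("virsh %v failed: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := defineAndStartNetwork("net.xml", "mk-custom-flannel-862860"); err != nil {
            log.Fatal(err)
        }
    }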
	I0520 14:23:36.212551  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.212388  658839 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:23:36.212566  658816 main.go:141] libmachine: (custom-flannel-862860) Setting up store path in /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860 ...
	I0520 14:23:36.212584  658816 main.go:141] libmachine: (custom-flannel-862860) Building disk image from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 14:23:36.212622  658816 main.go:141] libmachine: (custom-flannel-862860) Downloading /home/jenkins/minikube-integration/18929-602525/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 14:23:36.530984  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.530807  658839 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860/id_rsa...
	I0520 14:23:36.761637  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.761500  658839 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860/custom-flannel-862860.rawdisk...
	I0520 14:23:36.761672  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Writing magic tar header
	I0520 14:23:36.761688  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Writing SSH key tar header
	I0520 14:23:36.761706  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:36.761671  658839 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860 ...
	I0520 14:23:36.761936  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860
	I0520 14:23:36.761982  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860 (perms=drwx------)
	I0520 14:23:36.761996  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube/machines
	I0520 14:23:36.762012  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:23:36.762022  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18929-602525
	I0520 14:23:36.762040  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 14:23:36.762049  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home/jenkins
	I0520 14:23:36.762102  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Checking permissions on dir: /home
	I0520 14:23:36.762119  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube/machines (perms=drwxr-xr-x)
	I0520 14:23:36.762128  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | Skipping /home - not owner
	I0520 14:23:36.762170  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525/.minikube (perms=drwxr-xr-x)
	I0520 14:23:36.762226  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins/minikube-integration/18929-602525 (perms=drwxrwxr-x)
	I0520 14:23:36.762239  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 14:23:36.762253  658816 main.go:141] libmachine: (custom-flannel-862860) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 14:23:36.762267  658816 main.go:141] libmachine: (custom-flannel-862860) Creating domain...
	I0520 14:23:36.763544  658816 main.go:141] libmachine: (custom-flannel-862860) define libvirt domain using xml: 
	I0520 14:23:36.763566  658816 main.go:141] libmachine: (custom-flannel-862860) <domain type='kvm'>
	I0520 14:23:36.763577  658816 main.go:141] libmachine: (custom-flannel-862860)   <name>custom-flannel-862860</name>
	I0520 14:23:36.763589  658816 main.go:141] libmachine: (custom-flannel-862860)   <memory unit='MiB'>3072</memory>
	I0520 14:23:36.763598  658816 main.go:141] libmachine: (custom-flannel-862860)   <vcpu>2</vcpu>
	I0520 14:23:36.763611  658816 main.go:141] libmachine: (custom-flannel-862860)   <features>
	I0520 14:23:36.763625  658816 main.go:141] libmachine: (custom-flannel-862860)     <acpi/>
	I0520 14:23:36.763635  658816 main.go:141] libmachine: (custom-flannel-862860)     <apic/>
	I0520 14:23:36.763642  658816 main.go:141] libmachine: (custom-flannel-862860)     <pae/>
	I0520 14:23:36.763651  658816 main.go:141] libmachine: (custom-flannel-862860)     
	I0520 14:23:36.763659  658816 main.go:141] libmachine: (custom-flannel-862860)   </features>
	I0520 14:23:36.763669  658816 main.go:141] libmachine: (custom-flannel-862860)   <cpu mode='host-passthrough'>
	I0520 14:23:36.763676  658816 main.go:141] libmachine: (custom-flannel-862860)   
	I0520 14:23:36.763685  658816 main.go:141] libmachine: (custom-flannel-862860)   </cpu>
	I0520 14:23:36.763692  658816 main.go:141] libmachine: (custom-flannel-862860)   <os>
	I0520 14:23:36.763702  658816 main.go:141] libmachine: (custom-flannel-862860)     <type>hvm</type>
	I0520 14:23:36.763709  658816 main.go:141] libmachine: (custom-flannel-862860)     <boot dev='cdrom'/>
	I0520 14:23:36.763718  658816 main.go:141] libmachine: (custom-flannel-862860)     <boot dev='hd'/>
	I0520 14:23:36.763726  658816 main.go:141] libmachine: (custom-flannel-862860)     <bootmenu enable='no'/>
	I0520 14:23:36.763735  658816 main.go:141] libmachine: (custom-flannel-862860)   </os>
	I0520 14:23:36.763743  658816 main.go:141] libmachine: (custom-flannel-862860)   <devices>
	I0520 14:23:36.763753  658816 main.go:141] libmachine: (custom-flannel-862860)     <disk type='file' device='cdrom'>
	I0520 14:23:36.763769  658816 main.go:141] libmachine: (custom-flannel-862860)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860/boot2docker.iso'/>
	I0520 14:23:36.763779  658816 main.go:141] libmachine: (custom-flannel-862860)       <target dev='hdc' bus='scsi'/>
	I0520 14:23:36.763786  658816 main.go:141] libmachine: (custom-flannel-862860)       <readonly/>
	I0520 14:23:36.763795  658816 main.go:141] libmachine: (custom-flannel-862860)     </disk>
	I0520 14:23:36.763804  658816 main.go:141] libmachine: (custom-flannel-862860)     <disk type='file' device='disk'>
	I0520 14:23:36.763815  658816 main.go:141] libmachine: (custom-flannel-862860)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 14:23:36.763831  658816 main.go:141] libmachine: (custom-flannel-862860)       <source file='/home/jenkins/minikube-integration/18929-602525/.minikube/machines/custom-flannel-862860/custom-flannel-862860.rawdisk'/>
	I0520 14:23:36.763841  658816 main.go:141] libmachine: (custom-flannel-862860)       <target dev='hda' bus='virtio'/>
	I0520 14:23:36.763852  658816 main.go:141] libmachine: (custom-flannel-862860)     </disk>
	I0520 14:23:36.763860  658816 main.go:141] libmachine: (custom-flannel-862860)     <interface type='network'>
	I0520 14:23:36.763872  658816 main.go:141] libmachine: (custom-flannel-862860)       <source network='mk-custom-flannel-862860'/>
	I0520 14:23:36.763881  658816 main.go:141] libmachine: (custom-flannel-862860)       <model type='virtio'/>
	I0520 14:23:36.763888  658816 main.go:141] libmachine: (custom-flannel-862860)     </interface>
	I0520 14:23:36.763898  658816 main.go:141] libmachine: (custom-flannel-862860)     <interface type='network'>
	I0520 14:23:36.763906  658816 main.go:141] libmachine: (custom-flannel-862860)       <source network='default'/>
	I0520 14:23:36.763916  658816 main.go:141] libmachine: (custom-flannel-862860)       <model type='virtio'/>
	I0520 14:23:36.763922  658816 main.go:141] libmachine: (custom-flannel-862860)     </interface>
	I0520 14:23:36.763931  658816 main.go:141] libmachine: (custom-flannel-862860)     <serial type='pty'>
	I0520 14:23:36.763939  658816 main.go:141] libmachine: (custom-flannel-862860)       <target port='0'/>
	I0520 14:23:36.763949  658816 main.go:141] libmachine: (custom-flannel-862860)     </serial>
	I0520 14:23:36.763957  658816 main.go:141] libmachine: (custom-flannel-862860)     <console type='pty'>
	I0520 14:23:36.763966  658816 main.go:141] libmachine: (custom-flannel-862860)       <target type='serial' port='0'/>
	I0520 14:23:36.763973  658816 main.go:141] libmachine: (custom-flannel-862860)     </console>
	I0520 14:23:36.763982  658816 main.go:141] libmachine: (custom-flannel-862860)     <rng model='virtio'>
	I0520 14:23:36.763991  658816 main.go:141] libmachine: (custom-flannel-862860)       <backend model='random'>/dev/random</backend>
	I0520 14:23:36.764000  658816 main.go:141] libmachine: (custom-flannel-862860)     </rng>
	I0520 14:23:36.764008  658816 main.go:141] libmachine: (custom-flannel-862860)     
	I0520 14:23:36.764017  658816 main.go:141] libmachine: (custom-flannel-862860)     
	I0520 14:23:36.764024  658816 main.go:141] libmachine: (custom-flannel-862860)   </devices>
	I0520 14:23:36.764034  658816 main.go:141] libmachine: (custom-flannel-862860) </domain>
	I0520 14:23:36.764042  658816 main.go:141] libmachine: (custom-flannel-862860) 
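[editor's note] The domain XML above is generated from a template with the profile's name, memory, CPU count, disk and network filled in. A simplified rendering sketch using text/template; the field names and the trimmed-down XML are illustrative, not minikube's real template, and the disk path is a placeholder:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // domainTmpl is a trimmed-down version of the XML printed above; the real
    // definition carries more devices (cdrom, serial console, rng, ...).
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainSpec struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    func main() {
        spec := domainSpec{
            Name:      "custom-flannel-862860",
            MemoryMiB: 3072,
            CPUs:      2,
            DiskPath:  "/path/to/custom-flannel-862860.rawdisk", // placeholder
            Network:   "mk-custom-flannel-862860",
        }
        // Render the domain definition; `virsh define` accepts a file with this content.
        if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, spec); err != nil {
            log.Fatal(err)
        }
    }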
	I0520 14:23:36.771812  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:65:78:e5 in network default
	I0520 14:23:36.772777  658816 main.go:141] libmachine: (custom-flannel-862860) Ensuring networks are active...
	I0520 14:23:36.772801  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:36.773734  658816 main.go:141] libmachine: (custom-flannel-862860) Ensuring network default is active
	I0520 14:23:36.774095  658816 main.go:141] libmachine: (custom-flannel-862860) Ensuring network mk-custom-flannel-862860 is active
	I0520 14:23:36.774794  658816 main.go:141] libmachine: (custom-flannel-862860) Getting domain xml...
	I0520 14:23:36.775699  658816 main.go:141] libmachine: (custom-flannel-862860) Creating domain...
	I0520 14:23:38.191056  658816 main.go:141] libmachine: (custom-flannel-862860) Waiting to get IP...
	I0520 14:23:38.192120  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:38.192687  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:38.192717  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:38.192654  658839 retry.go:31] will retry after 292.940027ms: waiting for machine to come up
	I0520 14:23:38.487576  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:38.488213  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:38.488246  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:38.488151  658839 retry.go:31] will retry after 244.090174ms: waiting for machine to come up
	I0520 14:23:38.733584  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:38.734248  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:38.734279  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:38.734188  658839 retry.go:31] will retry after 455.368577ms: waiting for machine to come up
	I0520 14:23:39.191717  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:39.192319  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:39.192354  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:39.192287  658839 retry.go:31] will retry after 605.622695ms: waiting for machine to come up
	I0520 14:23:39.799203  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:39.799801  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:39.799853  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:39.799781  658839 retry.go:31] will retry after 468.497239ms: waiting for machine to come up
	I0520 14:23:40.270442  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:40.270963  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:40.271009  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:40.270874  658839 retry.go:31] will retry after 818.928736ms: waiting for machine to come up
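[editor's note] The "will retry after ..." lines are a polling loop with growing, jittered delays that waits for the new VM to pick up a DHCP lease. A sketch of the same pattern; the use of `virsh net-dhcp-leases` to look up the MAC is an assumption for illustration and not minikube's retry package:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "strings"
        "time"
    )

    // leaseFor returns the DHCP lease line for the MAC once one exists.
    func leaseFor(network, mac string) (string, bool) {
        out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, mac) {
                return line, true // the lease line carries the assigned IP
            }
        }
        return "", false
    }

    func main() {
        network, mac := "mk-custom-flannel-862860", "52:54:00:69:06:5c"
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 20; attempt++ {
            if lease, ok := leaseFor(network, mac); ok {
                fmt.Println("machine is up:", lease)
                return
            }
            // Grow the delay and add jitter, like the retry.go lines above.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }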
	I0520 14:23:41.133771  658162 kubeadm.go:309] [api-check] The API server is healthy after 5.002241999s
	I0520 14:23:41.146448  658162 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 14:23:41.164339  658162 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 14:23:41.198007  658162 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 14:23:41.198289  658162 kubeadm.go:309] [mark-control-plane] Marking the node calico-862860 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 14:23:41.214475  658162 kubeadm.go:309] [bootstrap-token] Using token: jfrocj.turkkqhjboafa0sw
	I0520 14:23:41.216951  658162 out.go:204]   - Configuring RBAC rules ...
	I0520 14:23:41.217123  658162 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 14:23:41.221940  658162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 14:23:41.235786  658162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 14:23:41.240800  658162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 14:23:41.256218  658162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 14:23:41.265641  658162 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 14:23:41.545996  658162 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 14:23:41.969132  658162 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 14:23:42.545886  658162 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 14:23:42.546917  658162 kubeadm.go:309] 
	I0520 14:23:42.547002  658162 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 14:23:42.547014  658162 kubeadm.go:309] 
	I0520 14:23:42.547147  658162 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 14:23:42.547173  658162 kubeadm.go:309] 
	I0520 14:23:42.547224  658162 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 14:23:42.547311  658162 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 14:23:42.547382  658162 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 14:23:42.547392  658162 kubeadm.go:309] 
	I0520 14:23:42.547465  658162 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 14:23:42.547475  658162 kubeadm.go:309] 
	I0520 14:23:42.547539  658162 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 14:23:42.547557  658162 kubeadm.go:309] 
	I0520 14:23:42.547627  658162 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 14:23:42.547733  658162 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 14:23:42.547829  658162 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 14:23:42.547840  658162 kubeadm.go:309] 
	I0520 14:23:42.547928  658162 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 14:23:42.548007  658162 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 14:23:42.548020  658162 kubeadm.go:309] 
	I0520 14:23:42.548088  658162 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jfrocj.turkkqhjboafa0sw \
	I0520 14:23:42.548252  658162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa \
	I0520 14:23:42.548281  658162 kubeadm.go:309] 	--control-plane 
	I0520 14:23:42.548286  658162 kubeadm.go:309] 
	I0520 14:23:42.548370  658162 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 14:23:42.548377  658162 kubeadm.go:309] 
	I0520 14:23:42.548477  658162 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jfrocj.turkkqhjboafa0sw \
	I0520 14:23:42.548623  658162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20e463692964e6f5fa4e8551cf5b85a3811c33d04e14f645a7ba8f5bcc2686aa 
	I0520 14:23:42.548909  658162 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
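[editor's note] The --discovery-token-ca-cert-hash value printed in the join commands above is documented by kubeadm as "sha256:" plus the hex SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small standard-library sketch that recomputes it from the usual kubeadm CA location:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Read and parse the cluster CA certificate.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }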
	I0520 14:23:42.549023  658162 cni.go:84] Creating CNI manager for "calico"
	I0520 14:23:42.551625  658162 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0520 14:23:42.553801  658162 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 14:23:42.553826  658162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253815 bytes)
	I0520 14:23:42.571273  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 14:23:41.091673  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:41.092229  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:41.092262  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:41.092165  658839 retry.go:31] will retry after 947.664412ms: waiting for machine to come up
	I0520 14:23:42.041357  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:42.041942  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:42.041975  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:42.041867  658839 retry.go:31] will retry after 955.751094ms: waiting for machine to come up
	I0520 14:23:42.999105  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:42.999699  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:42.999729  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:42.999631  658839 retry.go:31] will retry after 1.393424785s: waiting for machine to come up
	I0520 14:23:44.394419  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:44.395212  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:44.395235  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:44.395142  658839 retry.go:31] will retry after 2.154843893s: waiting for machine to come up
	I0520 14:23:43.945747  658162 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.374425709s)
	I0520 14:23:43.945810  658162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 14:23:43.945909  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:43.945926  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-862860 minikube.k8s.io/updated_at=2024_05_20T14_23_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=calico-862860 minikube.k8s.io/primary=true
	I0520 14:23:43.972548  658162 ops.go:34] apiserver oom_adj: -16
	I0520 14:23:44.063982  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:44.565055  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:45.064573  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:45.564860  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:46.064961  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:46.564050  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:47.064665  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:47.564319  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:48.064977  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:48.108726  656347 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.343991729s)
	I0520 14:23:48.108755  656347 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:23:48.108815  656347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:23:48.114419  656347 start.go:562] Will wait 60s for crictl version
	I0520 14:23:48.114497  656347 ssh_runner.go:195] Run: which crictl
	I0520 14:23:48.119199  656347 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:23:48.162542  656347 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
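[editor's note] "Will wait 60s for socket path /var/run/crio/crio.sock" above is a stat-until-present loop run before crictl is queried. A sketch of that wait; the path comes from the log, while the 500ms polling interval is an assumption:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file is present; crictl can talk to it now
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("crio socket is ready")
    }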
	I0520 14:23:48.162650  656347 ssh_runner.go:195] Run: crio --version
	I0520 14:23:48.192456  656347 ssh_runner.go:195] Run: crio --version
	I0520 14:23:48.227271  656347 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 14:23:48.229583  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:23:48.233076  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:48.233557  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:23:48.233590  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:48.233796  656347 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 14:23:48.238482  656347 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 14:23:48.238623  656347 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:23:48.238687  656347 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:23:48.286132  656347 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:23:48.286156  656347 crio.go:433] Images already preloaded, skipping extraction
	I0520 14:23:48.286213  656347 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:23:48.322551  656347 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:23:48.322586  656347 cache_images.go:84] Images are preloaded, skipping loading
	I0520 14:23:48.322597  656347 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.30.1 crio true true} ...
	I0520 14:23:48.322726  656347 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-366203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 14:23:48.322796  656347 ssh_runner.go:195] Run: crio config
	I0520 14:23:48.382578  656347 cni.go:84] Creating CNI manager for ""
	I0520 14:23:48.382602  656347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:23:48.382621  656347 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:23:48.382650  656347 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-366203 NodeName:kubernetes-upgrade-366203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 14:23:48.382798  656347 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-366203"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
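[editor's note] The generated kubeadm config above is one file holding several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick way to sanity-check such a file before handing it to kubeadm is to decode each document and list its kind; this sketch uses gopkg.in/yaml.v3, and the local file name kubeadm.yaml is an assumption:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents in the stream
                }
                log.Fatal(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }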
	I0520 14:23:48.382875  656347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 14:23:48.393266  656347 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:23:48.393347  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:23:48.403428  656347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0520 14:23:48.419870  656347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:23:48.436885  656347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0520 14:23:48.453342  656347 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0520 14:23:48.457377  656347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:23:48.612657  656347 ssh_runner.go:195] Run: sudo systemctl start kubelet
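[editor's note] The steps above copy a kubelet flag drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, reload systemd, and start the kubelet. A host-side sketch of the same sequence; the drop-in body is abbreviated from the unit shown earlier in the log, and running it requires root:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-366203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
    `

    func main() {
        // Write the kubelet flag override where systemd merges it into kubelet.service.
        dir := "/etc/systemd/system/kubelet.service.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            log.Fatal(err)
        }
        // Reload unit files and (re)start the kubelet, as in the log above.
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                log.Fatalf("systemctl %v: %v\n%s", args, err, out)
            }
        }
        fmt.Println("kubelet drop-in installed and service started")
    }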
	I0520 14:23:48.630991  656347 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203 for IP: 192.168.39.196
	I0520 14:23:48.631100  656347 certs.go:194] generating shared ca certs ...
	I0520 14:23:48.631134  656347 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:48.631334  656347 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:23:48.631375  656347 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:23:48.631385  656347 certs.go:256] generating profile certs ...
	I0520 14:23:48.631494  656347 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key
	I0520 14:23:48.631556  656347 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key.8da18aa8
	I0520 14:23:48.631659  656347 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key
	I0520 14:23:48.631790  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:23:48.631818  656347 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:23:48.631828  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:23:48.631848  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:23:48.631869  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:23:48.631891  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:23:48.631926  656347 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:23:48.632545  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:23:48.660539  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:23:48.692233  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:23:48.719715  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:23:48.749096  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 14:23:48.774844  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 14:23:48.800577  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:23:48.826891  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:23:48.851018  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:23:48.876554  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:23:48.900133  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:23:48.923925  656347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:23:48.939679  656347 ssh_runner.go:195] Run: openssl version
	I0520 14:23:46.552095  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:46.552676  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:46.552734  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:46.552649  658839 retry.go:31] will retry after 1.805127395s: waiting for machine to come up
	I0520 14:23:48.359833  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:48.360474  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:48.360503  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:48.360399  658839 retry.go:31] will retry after 3.390444051s: waiting for machine to come up
	I0520 14:23:48.564831  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:49.064912  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:49.564963  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:50.064961  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:50.564424  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:51.064179  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:51.564322  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:52.064281  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:52.564501  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:53.064097  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:48.946187  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:23:48.956870  656347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:23:48.961155  656347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:23:48.961217  656347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:23:48.967050  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:23:48.976453  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:23:48.988355  656347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:23:48.993062  656347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:23:48.993131  656347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:23:48.999222  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 14:23:49.009304  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:23:49.020997  656347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:23:49.025719  656347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:23:49.025783  656347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:23:49.031466  656347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
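[editor's note] Each trusted CA above is also linked into /etc/ssl/certs under its openssl subject hash (the b5213941.0, 51391683.0 and 3ec20f2e.0 names). A sketch that asks openssl for the hash and creates the link, matching the `ln -fs` commands in the log; it assumes openssl on PATH and root privileges:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into /etc/ssl/certs as <subject-hash>.0,
    // the layout openssl uses when looking up trusted CAs.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Replace any stale link, like the `ln -fs` in the log.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }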
	I0520 14:23:49.044016  656347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:23:49.048715  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 14:23:49.054204  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 14:23:49.060610  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 14:23:49.066376  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 14:23:49.072065  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 14:23:49.077814  656347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
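Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate expires within 86400 seconds (24 hours). The same question can be asked directly of crypto/x509; the sketch below is an illustrative equivalent, not what the test run executes (it shells out to openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d -- the check `openssl x509 -checkend 86400` performs for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		fmt.Println(c, "expires within 24h:", soon, "err:", err)
	}
}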
	I0520 14:23:49.083417  656347 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:23:49.083533  656347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:23:49.083624  656347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:23:49.131640  656347 cri.go:89] found id: "87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48"
	I0520 14:23:49.131669  656347 cri.go:89] found id: "bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658"
	I0520 14:23:49.131674  656347 cri.go:89] found id: "c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9"
	I0520 14:23:49.131678  656347 cri.go:89] found id: "bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482"
	I0520 14:23:49.131682  656347 cri.go:89] found id: "0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52"
	I0520 14:23:49.131685  656347 cri.go:89] found id: "699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e"
	I0520 14:23:49.131689  656347 cri.go:89] found id: "8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4"
	I0520 14:23:49.131692  656347 cri.go:89] found id: ""
	I0520 14:23:49.131750  656347 ssh_runner.go:195] Run: sudo runc list -f json
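The `runc list -f json` output dumped below is an array of container state objects; the keys used for filtering that are visible in the dump are `id`, `status`, and the `io.kubernetes.pod.namespace` annotation. A small Go sketch of that filtering step, assuming only those fields (an illustration, not minikube's cri package):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the subset of `runc list -f json` fields used here;
// the JSON field names match those visible in the dump below.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

// kubeSystemContainers runs `sudo runc list -f json` and returns the IDs of
// containers whose pod namespace annotation is kube-system.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range containers {
		if c.Annotations["io.kubernetes.pod.namespace"] == "kube-system" {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println("kube-system containers:", ids, "err:", err)
}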
	I0520 14:23:49.172048  656347 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52/userdata","rootfs":"/var/lib/containers/storage/overlay/4b04745722d8db0245501017434c8e243fc8331fc3062b78505bbc32423d104e/merged","created":"2024-05-20T14:22:01.888009544Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ac6c6b5e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ac6c6b5e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.t
erminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.814386655Z","io.kubernetes.cri-o.Image":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.1","io.kubernetes.cri-o.ImageRef":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8c6071bc1bd5a875e6f04e528de940b6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-366203_8c6071bc1bd5a875e6f04e528de940b6/kube-controller-manage
r/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b04745722d8db0245501017434c8e243fc8331fc3062b78505bbc32423d104e/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes"
:"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8c6071bc1bd5a875e6f04e528de940b6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8c6071bc1bd5a875e6f04e528de940b6/containers/kube-controller-manager/d504aa01\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinu
x_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.hash":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079806375Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9/userdata","rootfs":"/var/lib/containers/storage/overlay/7d0678e11e3b7201bffa0786c3f030a362aad1d41bd25a6133e8f7bea32b705e/merged","created":"2024
-05-20T14:22:01.61061974Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"7fd1e6da689a7cf27065fb5956e3f8ea\",\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.079807591Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod7fd1e6da689a7cf27065fb5956e3f8ea","io.kubernetes.cri-o.ContainerID":"10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.551500194Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b
67a16588b9/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"7fd1e6da689a7cf27065fb5956e3f8ea\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-366203\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-366203_7fd1e6da689a7cf27065fb5956e3f8ea/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-366203\",\"uid\":\"7fd1e6da689a7cf27065fb5956e3f8ea\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d0678e11e3b7201bffa0786c3f030a362aad1d41bd25a6133e8
f7bea32b705e/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Sh
mPath":"/var/run/containers/storage/overlay-containers/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7fd1e6da689a7cf27065fb5956e3f8ea","kubernetes.io/config.hash":"7fd1e6da689a7cf27065fb5956e3f8ea","kubernetes.io/config.seen":"2024-05-20T14:22:01.079807591Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49/userdata","rootfs":"/var/lib/containers/storage/overlay/cb09d6b3a0b82de96caa70f3430766e8d54a6ac3cca4c6ec217571afdf5a6db5/merged","created":"2024-05-20T14:22:16.312651543Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","
io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.196:2379\",\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.128087480Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ca6cfbdd5dced9c86779bd997ff5a13d\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podca6cfbdd5dced9c86779bd997ff5a13d","io.kubernetes.cri-o.ContainerID":"4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.261792064Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49/userdata/hostname","io.kubernetes.cri-o.Image":"regist
ry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-366203\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-366203_ca6cfbdd5dced9c86779bd997ff5a13d/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-366203\",\"uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cb09d6b3a0b82de96caa70f3430766e8d54a6ac3cca4c6ec217571afdf5a6db5/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cf
bdd5dced9c86779bd997ff5a13d_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49/u
serdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ca6cfbdd5dced9c86779bd997ff5a13d","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"ca6cfbdd5dced9c86779bd997ff5a13d","kubernetes.io/config.seen":"2024-05-20T14:22:01.128087480Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e/userdata","rootfs":"/var/lib/containers/storage/overlay/8d60b1b05b481c93b8f79f2f5ab791afddc39cb8087d85c3779ffa81f14d5d32/merged","created":"2024-05-20T14:22:01.880511994Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"62f90dcc","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.co
ntainer.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"62f90dcc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.779844596Z","io.kubernetes.cri-o.Image":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.1","io.kubernetes.cri-o.ImageRef":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kub
e-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b13e3994210c45dc653c312b9c3d77c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-366203_b13e3994210c45dc653c312b9c3d77c6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8d60b1b05b481c93b8f79f2f5ab791afddc39cb8087d85c3779ffa81f14d5d32/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593","io.kubernetes.cri
-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b13e3994210c45dc653c312b9c3d77c6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b13e3994210c45dc653c312b9c3d77c6/containers/kube-apiserver/d04eaea8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\
"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b13e3994210c45dc653c312b9c3d77c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"b13e3994210c45dc653c312b9c3d77c6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079802040Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674/userdata","rootfs":"/var/lib/containers/storage/overlay/d9186d87cf7c61253dc86ca97b69353855190cb6171020c527c6628bc2dabbd3/merged","created":"2024-05-20T14:22:01.67
3135958Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.128087480Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ca6cfbdd5dced9c86779bd997ff5a13d\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.196:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podca6cfbdd5dced9c86779bd997ff5a13d","io.kubernetes.cri-o.ContainerID":"822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.577236484Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/822b91
0c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-366203\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-366203_ca6cfbdd5dced9c86779bd997ff5a13d/822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-366203\",\"uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d9186d87cf7c61253dc86ca97b69353855190cb6171020c527c662
8bc2dabbd3/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/conta
iners/storage/overlay-containers/822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ca6cfbdd5dced9c86779bd997ff5a13d","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"ca6cfbdd5dced9c86779bd997ff5a13d","kubernetes.io/config.seen":"2024-05-20T14:22:01.128087480Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593/userdata","rootfs":"/var/lib/containers/storage/overlay/07e948caaa730f4b739c433d8f0ab0aa04765e6ab24ed84232b3a91ed2454d0d/merged","created":"2024-05-20T14:22:01.6730765Z","annotations":{"component":"kube-apiserver","io.containe
r.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.196:8443\",\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.079802040Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b13e3994210c45dc653c312b9c3d77c6\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podb13e3994210c45dc653c312b9c3d77c6","io.kubernetes.cri-o.ContainerID":"84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.572107862Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc
225073d19e5a3c9e593/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"b13e3994210c45dc653c312b9c3d77c6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"component\":\"kube-apiserver\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-366203_b13e3994210c45dc653c312b9c3d77c6/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"uid\":\"b13e3994210c45dc653c312b9c3d77c6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/07e948caaa730f4b739c433d8f0ab0aa04765e6ab24
ed84232b3a91ed2454d0d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes
.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b13e3994210c45dc653c312b9c3d77c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"b13e3994210c45dc653c312b9c3d77c6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079802040Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48/userdata","rootfs":"/var/lib/containers/storage/overlay/677c719f0e3c9de9cd49afbbc62aab9aae692b5152cf221031408050862577da/merged","created":"2024-05-20T14:22:16.693176051Z","annot
ations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ac6c6b5e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ac6c6b5e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.573724069Z","io.kubernetes.cri-o.Image":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.1","i
o.kubernetes.cri-o.ImageRef":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8c6071bc1bd5a875e6f04e528de940b6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-366203_8c6071bc1bd5a875e6f04e528de940b6/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/677c719f0e3c9de9cd49afbbc62aab9aae692b5152cf221031408050862577da/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/
containers/storage/overlay-containers/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8c6071bc1bd5a875e6f04e528de940b6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8c6071bc1bd5a875e6f04e528de940b6/containers/kube-controller-manager/15c5fd3d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"
/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8c6071bc1bd5a875e6f04e528de940b6","kubern
etes.io/config.hash":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079806375Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4/userdata","rootfs":"/var/lib/containers/storage/overlay/92b552d5dc0f4807f8934cea831a1cc9423e8a7d764ae97d5ae03bd8ddf5c340/merged","created":"2024-05-20T14:22:01.788659184Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"200064a4","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"200064a4\",\"io.kubernetes.container.restartCount\":\"0\",
\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.712941097Z","io.kubernetes.cri-o.Image":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.1","io.kubernetes.cri-o.ImageRef":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7fd1e6da689a7cf27065fb5956e3f8ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upg
rade-366203_7fd1e6da689a7cf27065fb5956e3f8ea/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/92b552d5dc0f4807f8934cea831a1cc9423e8a7d764ae97d5ae03bd8ddf5c340/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kub
ernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7fd1e6da689a7cf27065fb5956e3f8ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7fd1e6da689a7cf27065fb5956e3f8ea/containers/kube-scheduler/af6b27ea\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7fd1e6da689a7cf27065fb5956e3f8ea","kubernetes.io/config.hash":"7fd1e6da689a7cf27065fb5956e3f8ea","kubernetes.io/config.seen":"2024-05-20T14:22:01.079807591Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2
1456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52/userdata","rootfs":"/var/lib/containers/storage/overlay/32803131d0be2c79d37f168834fa2d2c35c1b7c122ffa713e27b14bb97a0b143/merged","created":"2024-05-20T14:22:01.64738275Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"8c6071bc1bd5a875e6f04e528de940b6\",\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.079806375Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod8c6071bc1bd5a875e6f04e528de940b6","io.kubernetes.cri-o.ContainerID":"a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd
5a875e6f04e528de940b6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.532977365Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-366203\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"8c6071bc1bd5a875e6f04e528de940b6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernet
es-upgrade-366203_8c6071bc1bd5a875e6f04e528de940b6/a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-366203\",\"uid\":\"8c6071bc1bd5a875e6f04e528de940b6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/32803131d0be2c79d37f168834fa2d2c35c1b7c122ffa713e27b14bb97a0b143/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/a21456
ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.hash":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079806375Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bc1cf18bb1df08498c48818b03a574a97eef476eb81
f9983c09ec99af2045482","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482/userdata","rootfs":"/var/lib/containers/storage/overlay/662da1cdbd3b65189f90fc28f723c1320931e8680e618bc30bd2d31de1428bdd/merged","created":"2024-05-20T14:22:02.003308235Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"98fa159","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"98fa159\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bc1cf18bb1df08498c48818b03a574a
97eef476eb81f9983c09ec99af2045482","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:01.859307566Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-366203_ca6cfbdd5dced9c86779bd997ff5a13d/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/662da1cdbd3b65189f90fc28f723c1320931e8680e618bc30bd2d31de1428bdd/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ku
bernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ca6cfbdd5dced9c86779bd997ff5a13d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ca6cfbdd5dced9c86779bd997ff5a13d/containers/etcd/1d8329f7
\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ca6cfbdd5dced9c86779bd997ff5a13d","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"ca6cfbdd5dced9c86779bd997ff5a13d","kubernetes.io/config.seen":"2024-05-20T14:22:01.128087480Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-cont
ainers/bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658/userdata","rootfs":"/var/lib/containers/storage/overlay/eb25e8c1f1cd8f0e1729d055de0b837df1c998726a9e2552d351bb62d5b86baa/merged","created":"2024-05-20T14:22:16.580992793Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"62f90dcc","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"62f90dcc\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658","io.kubernetes.cri-o.ContainerType":"container","
io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.524459702Z","io.kubernetes.cri-o.Image":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.1","io.kubernetes.cri-o.ImageRef":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b13e3994210c45dc653c312b9c3d77c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-366203_b13e3994210c45dc653c312b9c3d77c6/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/eb25e8c1f1cd8f0e1729d055de0b837df1c998726a9e2552d351bb62d5b86baa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-api
server-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b13e3994210c45dc653c312b9c3d77c6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b13e3994210c45dc653c312b9c3d77c6/conta
iners/kube-apiserver/c328a8d6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b13e3994210c45dc653c312b9c3d77c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"b13e3994210c45dc653c312b9c3d77c6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079802040Z","kubernetes.io/config.source":"file"},"owner":"r
oot"},{"ociVersion":"1.0.2-dev","id":"c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb/userdata","rootfs":"/var/lib/containers/storage/overlay/f1628d1b735765269e905cbdad8d000a854dc744310c4bb3cd9f70bbcbc18f5c/merged","created":"2024-05-20T14:22:16.38878522Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b13e3994210c45dc653c312b9c3d77c6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.196:8443\",\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.079802040Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podb13e3994210c45dc653c312b9c3d77c6","io.kubernetes.cri-o.ContainerID":"c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0
cdb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.308661247Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"b13e3994210c45dc653c312b9c3d77c6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","i
o.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-366203_b13e3994210c45dc653c312b9c3d77c6/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-366203\",\"uid\":\"b13e3994210c45dc653c312b9c3d77c6\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f1628d1b735765269e905cbdad8d000a854dc744310c4bb3cd9f70bbcbc18f5c/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernete
s.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-366203_kube-system_b13e3994210c45dc653c312b9c3d77c6_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b13e3994210c45dc653c312b9c3d77c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"b13e3994210c45dc653c312b9c3d77c6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079802040Z","kuberne
tes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9/userdata","rootfs":"/var/lib/containers/storage/overlay/112579a76b1d6f93e56b026f5dcda54d136b1c7a98e487301528c8dcc82e0fdc/merged","created":"2024-05-20T14:22:16.519488894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"98fa159","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"98fa159\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationM
essagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.418620001Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-366203\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ca6cfbdd5dced9c86779bd997ff5a13d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-366203_ca6cfbdd5dced9c86779bd997ff5a13d/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoi
nt":"/var/lib/containers/storage/overlay/112579a76b1d6f93e56b026f5dcda54d136b1c7a98e487301528c8dcc82e0fdc/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-366203_kube-system_ca6cfbdd5dced9c86779bd997ff5a13d_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ca6cfbdd5dced9c86779bd997ff5a13d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux
_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ca6cfbdd5dced9c86779bd997ff5a13d/containers/etcd/1b84166f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ca6cfbdd5dced9c86779bd997ff5a13d","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"ca6cfbdd5dced9c86779bd997ff5a13d","kubernetes.io/config.seen":"2024-05-20T14:22:01.128087480Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion"
:"1.0.2-dev","id":"df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37/userdata","rootfs":"/var/lib/containers/storage/overlay/4cde2d7eeff9b4dcc8b782d46cfa0c2f74a6a9c99e057ff323437f85803b1b39/merged","created":"2024-05-20T14:22:16.40749962Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-05-20T14:22:01.079806375Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"8c6071bc1bd5a875e6f04e528de940b6\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod8c6071bc1bd5a875e6f04e528de940b6","io.kubernetes.cri-o.ContainerID":"df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-366203_ku
be-system_8c6071bc1bd5a875e6f04e528de940b6_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-05-20T14:22:16.270608464Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-366203","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"8c6071bc1bd5a875e6f04e528de940b6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-366203\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-contro
ller-manager-kubernetes-upgrade-366203_8c6071bc1bd5a875e6f04e528de940b6/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-366203\",\"uid\":\"8c6071bc1bd5a875e6f04e528de940b6\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4cde2d7eeff9b4dcc8b782d46cfa0c2f74a6a9c99e057ff323437f85803b1b39/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/container
s/storage/overlay-containers/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-366203_kube-system_8c6071bc1bd5a875e6f04e528de940b6_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.hash":"8c6071bc1bd5a875e6f04e528de940b6","kubernetes.io/config.seen":"2024-05-20T14:22:01.079806375Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I0520 14:23:49.172808  656347 cri.go:126] list returned 14 containers
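The raw listing that precedes the "list returned 14 containers" line is the JSON state dump cri-o/runc emits for each container: an id, a status, bundle/rootfs paths, and a flat map of Kubernetes annotations that ties the container back to its pod. A minimal sketch of pulling the pod identity out of that structure, assuming only the id, status, and annotations fields visible above (this is not minikube's own parser, and the sample record is shortened):

package main

import (
	"encoding/json"
	"fmt"
)

// criState mirrors the fields of interest in the JSON listing above.
type criState struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	// Shortened, hypothetical sample record; the real entries carry many more annotations.
	raw := []byte(`[{"id":"c51ec6f1","status":"stopped","annotations":{"io.kubernetes.pod.name":"etcd-kubernetes-upgrade-366203","io.kubernetes.pod.namespace":"kube-system"}}]`)
	var states []criState
	if err := json.Unmarshal(raw, &states); err != nil {
		panic(err)
	}
	for _, s := range states {
		fmt.Printf("%s (%s) belongs to pod %s/%s\n", s.ID, s.Status,
			s.Annotations["io.kubernetes.pod.namespace"], s.Annotations["io.kubernetes.pod.name"])
	}
}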
	I0520 14:23:49.172839  656347 cri.go:129] container: {ID:0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52 Status:stopped}
	I0520 14:23:49.172879  656347 cri.go:135] skipping {0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.172893  656347 cri.go:129] container: {ID:10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9 Status:stopped}
	I0520 14:23:49.172901  656347 cri.go:131] skipping 10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9 - not in ps
	I0520 14:23:49.172908  656347 cri.go:129] container: {ID:4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49 Status:stopped}
	I0520 14:23:49.172915  656347 cri.go:131] skipping 4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49 - not in ps
	I0520 14:23:49.172919  656347 cri.go:129] container: {ID:699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e Status:stopped}
	I0520 14:23:49.172927  656347 cri.go:135] skipping {699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e stopped}: state = "stopped", want "paused"
	I0520 14:23:49.172933  656347 cri.go:129] container: {ID:822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674 Status:stopped}
	I0520 14:23:49.172940  656347 cri.go:131] skipping 822b910c9fc5fcd888a881e7936f873f2fbfc5220819a025fd5a5a7a4e21f674 - not in ps
	I0520 14:23:49.172945  656347 cri.go:129] container: {ID:84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593 Status:stopped}
	I0520 14:23:49.172951  656347 cri.go:131] skipping 84f5785cfbff819acbb8949002126e8b3f6ded0dd93fc225073d19e5a3c9e593 - not in ps
	I0520 14:23:49.172955  656347 cri.go:129] container: {ID:87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48 Status:stopped}
	I0520 14:23:49.172962  656347 cri.go:135] skipping {87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.172967  656347 cri.go:129] container: {ID:8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4 Status:stopped}
	I0520 14:23:49.172981  656347 cri.go:135] skipping {8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.172986  656347 cri.go:129] container: {ID:a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52 Status:stopped}
	I0520 14:23:49.172997  656347 cri.go:131] skipping a21456ead091b4c868300f2204e364a86da39d110e0d2e7711875a60adda4e52 - not in ps
	I0520 14:23:49.173002  656347 cri.go:129] container: {ID:bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482 Status:stopped}
	I0520 14:23:49.173008  656347 cri.go:135] skipping {bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.173013  656347 cri.go:129] container: {ID:bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658 Status:stopped}
	I0520 14:23:49.173020  656347 cri.go:135] skipping {bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.173025  656347 cri.go:129] container: {ID:c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb Status:stopped}
	I0520 14:23:49.173032  656347 cri.go:131] skipping c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb - not in ps
	I0520 14:23:49.173037  656347 cri.go:129] container: {ID:c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9 Status:stopped}
	I0520 14:23:49.173045  656347 cri.go:135] skipping {c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9 stopped}: state = "stopped", want "paused"
	I0520 14:23:49.173051  656347 cri.go:129] container: {ID:df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37 Status:stopped}
	I0520 14:23:49.173056  656347 cri.go:131] skipping df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37 - not in ps
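The cri.go:129/131/135 lines above apply two filters before concluding there is nothing to unpause: containers whose ID did not show up in the earlier crictl ps output are skipped as "not in ps", and the remainder are skipped because their state is "stopped" while the caller only wants "paused" containers. A small sketch of that filtering rule, with made-up IDs (illustrative, not the actual minikube code):

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that were seen by `crictl ps` and whose
// status matches the wanted state, logging a "skipping" line for everything else.
func filterByState(all []container, inPs map[string]bool, want string) []container {
	var kept []container
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{{ID: "aaa111", Status: "stopped"}, {ID: "bbb222", Status: "paused"}}
	inPs := map[string]bool{"aaa111": true}
	// Only containers that are both in ps and in the wanted state survive the filter.
	fmt.Println(filterByState(all, inPs, "paused"))
}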
	I0520 14:23:49.173112  656347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 14:23:49.185752  656347 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 14:23:49.185784  656347 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 14:23:49.185793  656347 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 14:23:49.185848  656347 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 14:23:49.196345  656347 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 14:23:49.197059  656347 kubeconfig.go:125] found "kubernetes-upgrade-366203" server: "https://192.168.39.196:8443"
	I0520 14:23:49.197970  656347 kapi.go:59] client config for kubernetes-upgrade-366203: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 14:23:49.198693  656347 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 14:23:49.208092  656347 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.196
	I0520 14:23:49.208138  656347 kubeadm.go:1154] stopping kube-system containers ...
	I0520 14:23:49.208167  656347 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 14:23:49.208249  656347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:23:49.244366  656347 cri.go:89] found id: "87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48"
	I0520 14:23:49.244399  656347 cri.go:89] found id: "bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658"
	I0520 14:23:49.244406  656347 cri.go:89] found id: "c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9"
	I0520 14:23:49.244411  656347 cri.go:89] found id: "bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482"
	I0520 14:23:49.244416  656347 cri.go:89] found id: "0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52"
	I0520 14:23:49.244420  656347 cri.go:89] found id: "699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e"
	I0520 14:23:49.244423  656347 cri.go:89] found id: "8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4"
	I0520 14:23:49.244427  656347 cri.go:89] found id: ""
	I0520 14:23:49.244437  656347 cri.go:234] Stopping containers: [87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48 bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658 c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9 bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482 0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52 699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e 8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4]
	I0520 14:23:49.244508  656347 ssh_runner.go:195] Run: which crictl
	I0520 14:23:49.248886  656347 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48 bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658 c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9 bc1cf18bb1df08498c48818b03a574a97eef476eb81f9983c09ec99af2045482 0d45eac5013431220e88af234c78475417fbd88bed9d0a3eada126a94f3e2d52 699c6b78db9b53748d29e1c66fbd61044e5383268c92a7fb83736dc4bb71da8e 8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4
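Stopping the kube-system containers is done with a single `crictl stop --timeout=10 <id> ...` invocation, as the ssh_runner line above shows. A hedged local equivalent (run directly instead of over SSH, with placeholder container IDs):

package main

import (
	"fmt"
	"os/exec"
)

// stopAll stops a batch of CRI containers with a 10s grace period, mirroring the
// `sudo /usr/bin/crictl stop --timeout=10 <ids...>` command in the log above.
func stopAll(ids []string) error {
	args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("crictl stop failed: %w: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder IDs; in the log these are the seven kube-system container IDs found above.
	if err := stopAll([]string{"containerID1", "containerID2"}); err != nil {
		fmt.Println(err)
	}
}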
	I0520 14:23:49.344539  656347 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 14:23:49.389351  656347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 14:23:49.399979  656347 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May 20 14:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 May 20 14:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 May 20 14:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May 20 14:22 /etc/kubernetes/scheduler.conf
	
	I0520 14:23:49.400060  656347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 14:23:49.409150  656347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 14:23:49.418130  656347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 14:23:49.426750  656347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 14:23:49.426806  656347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 14:23:49.436637  656347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 14:23:49.445684  656347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 14:23:49.445751  656347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
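The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that no longer references https://control-plane.minikube.internal:8443 is deleted so the following `kubeadm init phase kubeconfig` run can regenerate it. A sketch of that rule (illustrative, not the minikube source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not mention the expected
// control-plane endpoint, matching the grep + rm -f sequence in the log above.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("skip %s: %v\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}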
	I0520 14:23:49.455355  656347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 14:23:49.464768  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:23:49.527849  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:23:51.086284  656347 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.558396525s)
	I0520 14:23:51.086315  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:23:51.311136  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:23:51.392370  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
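restartPrimaryControlPlane then replays the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than running a full `kubeadm init`. A rough local sketch of driving that sequence, with the sudo/PATH handling from the log omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same phase order the log above runs against the generated config.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
		fmt.Printf("phase %q done\n", p)
	}
}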
	I0520 14:23:51.468245  656347 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:23:51.468345  656347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:23:51.969334  656347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:23:52.469012  656347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:23:52.490585  656347 api_server.go:72] duration metric: took 1.022337234s to wait for apiserver process to appear ...
	I0520 14:23:52.490616  656347 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:23:52.490641  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:53.564804  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:54.064078  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:54.565031  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:55.064311  658162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 14:23:55.228799  658162 kubeadm.go:1107] duration metric: took 11.282965667s to wait for elevateKubeSystemPrivileges
	W0520 14:23:55.228845  658162 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 14:23:55.228856  658162 kubeadm.go:393] duration metric: took 23.597646064s to StartCluster
	I0520 14:23:55.228877  658162 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:55.228964  658162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:23:55.230645  658162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:55.230919  658162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 14:23:55.230934  658162 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:23:55.233633  658162 out.go:177] * Verifying Kubernetes components...
	I0520 14:23:55.230996  658162 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 14:23:55.231152  658162 config.go:182] Loaded profile config "calico-862860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:23:55.235856  658162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:23:55.235933  658162 addons.go:69] Setting storage-provisioner=true in profile "calico-862860"
	I0520 14:23:55.235993  658162 addons.go:234] Setting addon storage-provisioner=true in "calico-862860"
	I0520 14:23:55.236030  658162 host.go:66] Checking if "calico-862860" exists ...
	I0520 14:23:55.236162  658162 addons.go:69] Setting default-storageclass=true in profile "calico-862860"
	I0520 14:23:55.236206  658162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-862860"
	I0520 14:23:55.236493  658162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:55.236519  658162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:55.236554  658162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:55.236574  658162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:55.258654  658162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0520 14:23:55.258921  658162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0520 14:23:55.259218  658162 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:55.259429  658162 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:55.259737  658162 main.go:141] libmachine: Using API Version  1
	I0520 14:23:55.259757  658162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:55.260217  658162 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:55.260361  658162 main.go:141] libmachine: Using API Version  1
	I0520 14:23:55.260382  658162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:55.260751  658162 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:55.260776  658162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:55.260798  658162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:55.261056  658162 main.go:141] libmachine: (calico-862860) Calling .GetState
	I0520 14:23:55.265171  658162 addons.go:234] Setting addon default-storageclass=true in "calico-862860"
	I0520 14:23:55.265219  658162 host.go:66] Checking if "calico-862860" exists ...
	I0520 14:23:55.265676  658162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:55.265705  658162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:55.283493  658162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43435
	I0520 14:23:55.283903  658162 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:55.284530  658162 main.go:141] libmachine: Using API Version  1
	I0520 14:23:55.284559  658162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:55.284966  658162 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:55.285197  658162 main.go:141] libmachine: (calico-862860) Calling .GetState
	I0520 14:23:55.288032  658162 main.go:141] libmachine: (calico-862860) Calling .DriverName
	I0520 14:23:55.290711  658162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:23:51.752917  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:51.753415  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:51.753445  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:51.753366  658839 retry.go:31] will retry after 3.723274605s: waiting for machine to come up
	I0520 14:23:55.478350  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | domain custom-flannel-862860 has defined MAC address 52:54:00:69:06:5c in network mk-custom-flannel-862860
	I0520 14:23:55.478937  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | unable to find current IP address of domain custom-flannel-862860 in network mk-custom-flannel-862860
	I0520 14:23:55.478959  658816 main.go:141] libmachine: (custom-flannel-862860) DBG | I0520 14:23:55.478876  658839 retry.go:31] will retry after 3.43781008s: waiting for machine to come up
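The interleaved custom-flannel-862860 lines show the KVM driver polling libvirt for the VM's DHCP lease and backing off ("will retry after 3.7s", then 3.4s) until an IP appears. A generic sketch of that wait-with-retry pattern; the lookupIP stub is a placeholder, not the libmachine API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for asking libvirt for the domain's current DHCP lease.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered delay until the machine reports an IP
// or the overall timeout expires.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		delay := time.Duration(attempt)*time.Second + time.Duration(rand.Intn(3000))*time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := waitForIP(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}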
	I0520 14:23:55.288559  658162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0520 14:23:55.292938  658162 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:23:55.292951  658162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 14:23:55.292966  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHHostname
	I0520 14:23:55.293330  658162 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:55.294207  658162 main.go:141] libmachine: Using API Version  1
	I0520 14:23:55.294228  658162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:55.294813  658162 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:55.295893  658162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:55.295939  658162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:55.296470  658162 main.go:141] libmachine: (calico-862860) DBG | domain calico-862860 has defined MAC address 52:54:00:05:4d:1c in network mk-calico-862860
	I0520 14:23:55.296859  658162 main.go:141] libmachine: (calico-862860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:4d:1c", ip: ""} in network mk-calico-862860: {Iface:virbr3 ExpiryTime:2024-05-20 15:23:15 +0000 UTC Type:0 Mac:52:54:00:05:4d:1c Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:calico-862860 Clientid:01:52:54:00:05:4d:1c}
	I0520 14:23:55.296891  658162 main.go:141] libmachine: (calico-862860) DBG | domain calico-862860 has defined IP address 192.168.61.136 and MAC address 52:54:00:05:4d:1c in network mk-calico-862860
	I0520 14:23:55.297108  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHPort
	I0520 14:23:55.297323  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHKeyPath
	I0520 14:23:55.297470  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHUsername
	I0520 14:23:55.297616  658162 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/calico-862860/id_rsa Username:docker}
	I0520 14:23:55.318744  658162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0520 14:23:55.319464  658162 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:55.320190  658162 main.go:141] libmachine: Using API Version  1
	I0520 14:23:55.320208  658162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:55.320707  658162 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:55.321077  658162 main.go:141] libmachine: (calico-862860) Calling .GetState
	I0520 14:23:55.323089  658162 main.go:141] libmachine: (calico-862860) Calling .DriverName
	I0520 14:23:55.323376  658162 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 14:23:55.323394  658162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 14:23:55.323417  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHHostname
	I0520 14:23:55.326587  658162 main.go:141] libmachine: (calico-862860) DBG | domain calico-862860 has defined MAC address 52:54:00:05:4d:1c in network mk-calico-862860
	I0520 14:23:55.326947  658162 main.go:141] libmachine: (calico-862860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:4d:1c", ip: ""} in network mk-calico-862860: {Iface:virbr3 ExpiryTime:2024-05-20 15:23:15 +0000 UTC Type:0 Mac:52:54:00:05:4d:1c Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:calico-862860 Clientid:01:52:54:00:05:4d:1c}
	I0520 14:23:55.326974  658162 main.go:141] libmachine: (calico-862860) DBG | domain calico-862860 has defined IP address 192.168.61.136 and MAC address 52:54:00:05:4d:1c in network mk-calico-862860
	I0520 14:23:55.327137  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHPort
	I0520 14:23:55.327330  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHKeyPath
	I0520 14:23:55.327476  658162 main.go:141] libmachine: (calico-862860) Calling .GetSSHUsername
	I0520 14:23:55.327597  658162 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/calico-862860/id_rsa Username:docker}
	I0520 14:23:55.583993  658162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:23:55.584160  658162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 14:23:55.599145  658162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:23:55.599176  658162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
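The long sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts{} block that resolves host.minikube.internal to the gateway IP ahead of the forward plugin, enables query logging, and feeds the result back through `kubectl replace`. A sketch of the Corefile edit itself, using a generic default Corefile rather than the cluster's actual ConfigMap contents:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block for host.minikube.internal immediately
// before the forward plugin, the same placement the sed expression above targets.
// (The real pipeline also inserts the `log` plugin; omitted here for brevity.)
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile, "        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
`
	fmt.Println(injectHostRecord(corefile, "192.168.61.1"))
}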
	I0520 14:23:56.053192  658162 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:56.053228  658162 main.go:141] libmachine: (calico-862860) Calling .Close
	I0520 14:23:56.053661  658162 main.go:141] libmachine: (calico-862860) DBG | Closing plugin on server side
	I0520 14:23:56.054875  658162 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:56.054891  658162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:56.054911  658162 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:56.054919  658162 main.go:141] libmachine: (calico-862860) Calling .Close
	I0520 14:23:56.055224  658162 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:56.055244  658162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:56.194824  658162 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:56.194859  658162 main.go:141] libmachine: (calico-862860) Calling .Close
	I0520 14:23:56.195199  658162 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:56.195228  658162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:56.195237  658162 main.go:141] libmachine: (calico-862860) DBG | Closing plugin on server side
	I0520 14:23:56.457663  658162 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0520 14:23:56.457845  658162 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:56.457868  658162 main.go:141] libmachine: (calico-862860) Calling .Close
	I0520 14:23:56.458242  658162 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:56.458258  658162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:56.458266  658162 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:56.458273  658162 main.go:141] libmachine: (calico-862860) Calling .Close
	I0520 14:23:56.458560  658162 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:56.458572  658162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:56.461229  658162 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0520 14:23:54.986574  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 14:23:54.986605  656347 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 14:23:54.986623  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:55.051870  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 14:23:55.051902  656347 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 14:23:55.051916  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:55.075102  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 14:23:55.075141  656347 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 14:23:55.490696  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:55.495590  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 14:23:55.495625  656347 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 14:23:55.991176  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:55.995808  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 14:23:55.995840  656347 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 14:23:56.491421  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:56.496547  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0520 14:23:56.504108  656347 api_server.go:141] control plane version: v1.30.1
	I0520 14:23:56.504140  656347 api_server.go:131] duration metric: took 4.013515912s to wait for apiserver health ...
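The healthz probes above go through three phases: 403 while the apiserver still treats the caller as system:anonymous, 500 while poststarthooks such as rbac/bootstrap-roles are still running, and finally 200. A minimal sketch of that polling loop; TLS verification is skipped here only for brevity, whereas minikube authenticates with the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz endpoint until it returns 200 or the timeout
// expires, logging the intermediate 403/500 responses seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.196:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}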
	I0520 14:23:56.504152  656347 cni.go:84] Creating CNI manager for ""
	I0520 14:23:56.504160  656347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:23:56.506670  656347 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 14:23:56.508873  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 14:23:56.525709  656347 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
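With the apiserver healthy, minikube writes a bridge CNI config to /etc/cni/net.d/1-k8s.conflist (the 496-byte file scp'd above). The exact contents are not shown in the log; the sketch below only emits a representative bridge + host-local conflist of that general shape, with an assumed subnet:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Representative bridge CNI conflist; field values here are illustrative,
	// not the exact file minikube ships.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	// In practice this JSON would be written to /etc/cni/net.d/1-k8s.conflist.
	fmt.Println(string(out))
}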
	I0520 14:23:56.547414  656347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:23:56.556899  656347 system_pods.go:59] 5 kube-system pods found
	I0520 14:23:56.556936  656347 system_pods.go:61] "etcd-kubernetes-upgrade-366203" [c7db9a3c-ea22-4086-82e6-8fd31e5daf93] Running
	I0520 14:23:56.556944  656347 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-366203" [e2c25c32-40a4-4800-87ed-895d38f5edec] Running
	I0520 14:23:56.556954  656347 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-366203" [ba666cd1-c368-41fd-9a61-322f0fbabd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 14:23:56.556961  656347 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-366203" [5f9fa12f-c44e-46ca-8423-b5cf14c5a916] Running
	I0520 14:23:56.556970  656347 system_pods.go:61] "storage-provisioner" [09104fad-ce88-4376-a24c-88ad5c9bfad0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0520 14:23:56.556977  656347 system_pods.go:74] duration metric: took 9.543657ms to wait for pod list to return data ...
	I0520 14:23:56.556989  656347 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:23:56.560689  656347 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:23:56.560718  656347 node_conditions.go:123] node cpu capacity is 2
	I0520 14:23:56.560732  656347 node_conditions.go:105] duration metric: took 3.73688ms to run NodePressure ...
	I0520 14:23:56.560754  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 14:23:56.994951  656347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 14:23:57.015175  656347 ops.go:34] apiserver oom_adj: -16
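The ops.go:34 line confirms the apiserver's oom_adj is -16, i.e. the kernel OOM killer will strongly prefer other processes over it. A sketch of that check, equivalent to the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command run just above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, then read its oom_adj from procfs.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(strings.Split(string(out), "\n")[0])
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}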
	I0520 14:23:57.015197  656347 kubeadm.go:591] duration metric: took 7.829397099s to restartPrimaryControlPlane
	I0520 14:23:57.015208  656347 kubeadm.go:393] duration metric: took 7.931801324s to StartCluster
	I0520 14:23:57.015236  656347 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:57.015303  656347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:23:57.025153  656347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:23:57.026171  656347 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:23:57.028637  656347 out.go:177] * Verifying Kubernetes components...
	I0520 14:23:57.026464  656347 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 14:23:57.028688  656347 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-366203"
	I0520 14:23:57.028724  656347 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-366203"
	I0520 14:23:57.026583  656347 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:23:57.028740  656347 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-366203"
	I0520 14:23:57.030824  656347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-366203"
	W0520 14:23:57.028790  656347 addons.go:243] addon storage-provisioner should already be in state true
	I0520 14:23:57.030922  656347 host.go:66] Checking if "kubernetes-upgrade-366203" exists ...
	I0520 14:23:57.031247  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:57.031276  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:57.031292  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:57.031320  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:57.031520  656347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:23:57.050303  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0520 14:23:57.050764  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:57.051301  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:23:57.051320  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:57.051723  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:57.051932  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:23:57.054664  656347 kapi.go:59] client config for kubernetes-upgrade-366203: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.crt", KeyFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/client.key", CAFile:"/home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 14:23:57.054923  656347 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-366203"
	W0520 14:23:57.054939  656347 addons.go:243] addon default-storageclass should already be in state true
	I0520 14:23:57.054969  656347 host.go:66] Checking if "kubernetes-upgrade-366203" exists ...
	I0520 14:23:57.055238  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:57.055254  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:57.055416  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45301
	I0520 14:23:57.055840  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:57.056339  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:23:57.056362  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:57.056733  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:57.057372  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:57.057407  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:57.077594  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0520 14:23:57.078155  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:57.078691  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:23:57.078710  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:57.079253  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:57.079440  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:23:57.081206  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0520 14:23:57.081324  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:23:57.084375  656347 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 14:23:57.082100  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:57.086903  656347 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:23:57.086922  656347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 14:23:57.086945  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:23:57.087486  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:23:57.087507  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:57.087908  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:57.088454  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:23:57.088488  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:23:57.099409  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:57.100154  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:23:57.100177  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:57.100221  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:23:57.100445  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:23:57.100653  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:23:57.100838  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:23:57.108209  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0520 14:23:57.108742  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:23:57.109256  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:23:57.109278  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:23:57.109603  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:23:57.109805  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:23:57.111858  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:23:57.112087  656347 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 14:23:57.112104  656347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 14:23:57.112125  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:23:57.115713  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:57.116328  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:23:57.116350  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:23:57.116553  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:23:57.116747  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:23:57.116937  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:23:57.117114  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:23:57.272077  656347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:23:57.292754  656347 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:23:57.292836  656347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:23:57.306736  656347 api_server.go:72] duration metric: took 280.514434ms to wait for apiserver process to appear ...
	I0520 14:23:57.306766  656347 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:23:57.306798  656347 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0520 14:23:57.313020  656347 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0520 14:23:57.313999  656347 api_server.go:141] control plane version: v1.30.1
	I0520 14:23:57.314019  656347 api_server.go:131] duration metric: took 7.246367ms to wait for apiserver health ...
	I0520 14:23:57.314027  656347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:23:57.319182  656347 system_pods.go:59] 5 kube-system pods found
	I0520 14:23:57.319206  656347 system_pods.go:61] "etcd-kubernetes-upgrade-366203" [c7db9a3c-ea22-4086-82e6-8fd31e5daf93] Running
	I0520 14:23:57.319212  656347 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-366203" [e2c25c32-40a4-4800-87ed-895d38f5edec] Running
	I0520 14:23:57.319220  656347 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-366203" [ba666cd1-c368-41fd-9a61-322f0fbabd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 14:23:57.319230  656347 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-366203" [5f9fa12f-c44e-46ca-8423-b5cf14c5a916] Running
	I0520 14:23:57.319238  656347 system_pods.go:61] "storage-provisioner" [09104fad-ce88-4376-a24c-88ad5c9bfad0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0520 14:23:57.319244  656347 system_pods.go:74] duration metric: took 5.211089ms to wait for pod list to return data ...
	I0520 14:23:57.319257  656347 kubeadm.go:576] duration metric: took 293.042903ms to wait for: map[apiserver:true system_pods:true]
	I0520 14:23:57.319278  656347 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:23:57.322809  656347 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:23:57.322829  656347 node_conditions.go:123] node cpu capacity is 2
	I0520 14:23:57.322838  656347 node_conditions.go:105] duration metric: took 3.554632ms to run NodePressure ...
	I0520 14:23:57.322849  656347 start.go:240] waiting for startup goroutines ...
	I0520 14:23:57.404337  656347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 14:23:57.406525  656347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 14:23:58.106421  656347 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:58.106452  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Close
	I0520 14:23:58.106430  656347 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:58.106540  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Close
	I0520 14:23:58.106769  656347 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:58.106788  656347 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:58.106797  656347 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:58.106804  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Close
	I0520 14:23:58.106914  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Closing plugin on server side
	I0520 14:23:58.106985  656347 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:58.107008  656347 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:58.107028  656347 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:58.107037  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Close
	I0520 14:23:58.107209  656347 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:58.107230  656347 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:58.107314  656347 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:58.107312  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | Closing plugin on server side
	I0520 14:23:58.107327  656347 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:58.114528  656347 main.go:141] libmachine: Making call to close driver server
	I0520 14:23:58.114551  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .Close
	I0520 14:23:58.114842  656347 main.go:141] libmachine: Successfully made call to close driver server
	I0520 14:23:58.114860  656347 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 14:23:58.118992  656347 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 14:23:56.459173  658162 node_ready.go:35] waiting up to 15m0s for node "calico-862860" to be "Ready" ...
	I0520 14:23:56.463551  658162 addons.go:505] duration metric: took 1.232551776s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0520 14:23:56.963548  658162 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-862860" context rescaled to 1 replicas
	I0520 14:23:58.121674  656347 addons.go:505] duration metric: took 1.095220973s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 14:23:58.121734  656347 start.go:245] waiting for cluster config update ...
	I0520 14:23:58.121749  656347 start.go:254] writing updated cluster config ...
	I0520 14:23:58.122067  656347 ssh_runner.go:195] Run: rm -f paused
	I0520 14:23:58.177349  656347 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 14:23:58.180121  656347 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-366203" cluster and "default" namespace by default
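
For reference, the healthz wait logged at api_server.go:253/279 above amounts to polling https://192.168.39.196:8443/healthz until it answers 200 with body "ok". A minimal Go sketch of that kind of check, illustrative only (certificate verification is skipped here; this is not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the deadline expires.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver here serves a self-signed certificate, so skip
			// verification for this illustrative check only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		if err := pollHealthz("https://192.168.39.196:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz: ok")
	}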
	
	
	==> CRI-O <==
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.924899430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716215038924865272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b980b82-8870-4b74-8f91-5c106fd170e5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.925777135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6babd731-ab88-4a07-98b1-f468755eba76 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.925846895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6babd731-ab88-4a07-98b1-f468755eba76 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.926025108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1880b8b0412d841dce355a4e11a153937110621726b0d566e17e0f531a89053e,PodSandboxId:add1a51a0b99e161f1cd1465e9e6d79ef7e12a8da4cc35b2693afb0196eb411d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716215032139647060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fabfcf38760ba1ec46041b938a949787975d0c12519f3e5f3190978d5d952fa,PodSandboxId:a20ae6aa0e3ea349f27952b284e851cdb2b562931275c284796000fe06342efb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716215032170332440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40bfcf56bc8a78fe229b17beaf9c21705c081a874930114c43e9f8d144a241f,PodSandboxId:eea20a169f42a47b4f5a57276342ea4b3457826e245ad655e321b9316e09f64c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716215032086024311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48,PodSandboxId:df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214936573724069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658,PodSandboxId:c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214936524459702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9,PodSandboxId:4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214936418620001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4,PodSandboxId:10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214921712941097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd1e6da689a7cf27065fb5956e3f8ea,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6babd731-ab88-4a07-98b1-f468755eba76 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.963791348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff05fc3c-d6db-48d6-86d2-87444ec64803 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.963913699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff05fc3c-d6db-48d6-86d2-87444ec64803 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.965584583Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c4c982b-6c78-4f8e-879c-0322d46d202b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.966118647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716215038966082754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c4c982b-6c78-4f8e-879c-0322d46d202b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.966902687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=915b5dbf-ce17-40f3-be2e-767f91fc91fb name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.966992547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=915b5dbf-ce17-40f3-be2e-767f91fc91fb name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:58 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:58.967254087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1880b8b0412d841dce355a4e11a153937110621726b0d566e17e0f531a89053e,PodSandboxId:add1a51a0b99e161f1cd1465e9e6d79ef7e12a8da4cc35b2693afb0196eb411d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716215032139647060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fabfcf38760ba1ec46041b938a949787975d0c12519f3e5f3190978d5d952fa,PodSandboxId:a20ae6aa0e3ea349f27952b284e851cdb2b562931275c284796000fe06342efb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716215032170332440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40bfcf56bc8a78fe229b17beaf9c21705c081a874930114c43e9f8d144a241f,PodSandboxId:eea20a169f42a47b4f5a57276342ea4b3457826e245ad655e321b9316e09f64c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716215032086024311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48,PodSandboxId:df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214936573724069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658,PodSandboxId:c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214936524459702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9,PodSandboxId:4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214936418620001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4,PodSandboxId:10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214921712941097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd1e6da689a7cf27065fb5956e3f8ea,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=915b5dbf-ce17-40f3-be2e-767f91fc91fb name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.020453869Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ecd164f-fbdf-4df1-b7b1-233ff752baf0 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.020548962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ecd164f-fbdf-4df1-b7b1-233ff752baf0 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.022305279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9741db6a-3216-4097-8962-5b46836556a0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.022691519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716215039022669331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9741db6a-3216-4097-8962-5b46836556a0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.023298637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff395f93-a1d5-4a30-b50e-73dc5ad48b31 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.023362234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff395f93-a1d5-4a30-b50e-73dc5ad48b31 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.023514794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1880b8b0412d841dce355a4e11a153937110621726b0d566e17e0f531a89053e,PodSandboxId:add1a51a0b99e161f1cd1465e9e6d79ef7e12a8da4cc35b2693afb0196eb411d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716215032139647060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fabfcf38760ba1ec46041b938a949787975d0c12519f3e5f3190978d5d952fa,PodSandboxId:a20ae6aa0e3ea349f27952b284e851cdb2b562931275c284796000fe06342efb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716215032170332440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40bfcf56bc8a78fe229b17beaf9c21705c081a874930114c43e9f8d144a241f,PodSandboxId:eea20a169f42a47b4f5a57276342ea4b3457826e245ad655e321b9316e09f64c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716215032086024311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48,PodSandboxId:df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214936573724069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658,PodSandboxId:c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214936524459702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9,PodSandboxId:4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214936418620001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4,PodSandboxId:10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214921712941097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd1e6da689a7cf27065fb5956e3f8ea,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff395f93-a1d5-4a30-b50e-73dc5ad48b31 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.062891208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c970dc6c-3219-4bf3-b938-cf3e39d338b9 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.063025633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c970dc6c-3219-4bf3-b938-cf3e39d338b9 name=/runtime.v1.RuntimeService/Version
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.064546210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5275e50-bcaa-49dc-ba7b-462cf7d64112 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.065234152Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716215039065164596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5275e50-bcaa-49dc-ba7b-462cf7d64112 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.065860844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56631f56-45e3-4e9d-a3c4-2d0806018a2d name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.065934499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56631f56-45e3-4e9d-a3c4-2d0806018a2d name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:23:59 kubernetes-upgrade-366203 crio[1915]: time="2024-05-20 14:23:59.066129515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1880b8b0412d841dce355a4e11a153937110621726b0d566e17e0f531a89053e,PodSandboxId:add1a51a0b99e161f1cd1465e9e6d79ef7e12a8da4cc35b2693afb0196eb411d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716215032139647060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fabfcf38760ba1ec46041b938a949787975d0c12519f3e5f3190978d5d952fa,PodSandboxId:a20ae6aa0e3ea349f27952b284e851cdb2b562931275c284796000fe06342efb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716215032170332440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40bfcf56bc8a78fe229b17beaf9c21705c081a874930114c43e9f8d144a241f,PodSandboxId:eea20a169f42a47b4f5a57276342ea4b3457826e245ad655e321b9316e09f64c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716215032086024311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48,PodSandboxId:df785746414239d37f6a0af28f7d05a0031fb89849546429d0bbef0c8820dc37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214936573724069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c6071bc1bd5a875e6f04e528de940b6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658,PodSandboxId:c1a594941c27ee6a0028681193881d25b10d34762e5f5296ebf142724d5c0cdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214936524459702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b13e3994210c45dc653c312b9c3d77c6,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9,PodSandboxId:4c34637aae4b61cbc1810cc1ca278214783acda6eb06981330389819712a0b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214936418620001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6cfbdd5dced9c86779bd997ff5a13d,},Annotations:map[string]string{io.kubernetes.container.hash: 98fa159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4,PodSandboxId:10d60f77ed1405f3976e003ce1801309c4fb6407961bba9f2ee81b67a16588b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214921712941097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-366203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd1e6da689a7cf27065fb5956e3f8ea,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56631f56-45e3-4e9d-a3c4-2d0806018a2d name=/runtime.v1.RuntimeService/ListContainers
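
The ListContainers request/response pairs above are ordinary CRI gRPC calls against CRI-O (the kubelet polling the runtime). A minimal sketch of the same call, assuming CRI-O's default socket at /var/run/crio/crio.sock (an assumption; the socket path does not appear in this report), and not how the test harness collects these logs:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path (the CRI-O default).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// No filter: same "full container list" behaviour crio logs above.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}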
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0fabfcf38760b       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 seconds ago        Running             kube-apiserver            2                   a20ae6aa0e3ea       kube-apiserver-kubernetes-upgrade-366203
	1880b8b0412d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago        Running             etcd                      2                   add1a51a0b99e       etcd-kubernetes-upgrade-366203
	f40bfcf56bc8a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   7 seconds ago        Running             kube-controller-manager   2                   eea20a169f42a       kube-controller-manager-kubernetes-upgrade-366203
	87b11a0755817       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   1                   df78574641423       kube-controller-manager-kubernetes-upgrade-366203
	bcf9c906614aa       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   About a minute ago   Exited              kube-apiserver            1                   c1a594941c27e       kube-apiserver-kubernetes-upgrade-366203
	c51ec6f170f18       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      1                   4c34637aae4b6       etcd-kubernetes-upgrade-366203
	8adb1ee817338       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   About a minute ago   Exited              kube-scheduler            0                   10d60f77ed140       kube-scheduler-kubernetes-upgrade-366203
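
The earlier system_pods.go wait ("5 kube-system pods found") corresponds to listing the kube-system pods and reading their phase, which is also what the container table above reflects on the runtime side. A hedged client-go sketch; the kubeconfig path below is an assumption, not taken from this report:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; use whatever kubeconfig your cluster writes.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Phase is what the "kube-system pods found" lines above summarize.
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}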
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-366203
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-366203
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:22:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-366203
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:23:55 +0000   Mon, 20 May 2024 14:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:23:55 +0000   Mon, 20 May 2024 14:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:23:55 +0000   Mon, 20 May 2024 14:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:23:55 +0000   Mon, 20 May 2024 14:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    kubernetes-upgrade-366203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddf5dedc9d554938887143e5b5a4b98d
	  System UUID:                ddf5dedc-9d55-4938-8871-43e5b5a4b98d
	  Boot ID:                    ec1d700c-567e-467a-9615-df500582e4ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-366203                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         105s
	  kube-system                 kube-apiserver-kubernetes-upgrade-366203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-366203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-kubernetes-upgrade-366203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  Starting                 118s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)      kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)      kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)      kubelet  Node kubernetes-upgrade-366203 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                   kubelet  Updated Node Allocatable limit across pods
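
The node_conditions.go check earlier in the log (ephemeral-storage 17734596Ki, cpu 2) and the node.kubernetes.io/not-ready taint shown above can both be read straight from the Node object. A sketch along the same lines, again with an assumed kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location, as in the previous sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "kubernetes-upgrade-366203", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Allocatable should match the cpu / ephemeral-storage figures above.
		cpu := node.Status.Allocatable[corev1.ResourceCPU]
		storage := node.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("allocatable: cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
		// Taints explain the Unschedulable storage-provisioner pod earlier in the log.
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}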
	
	
	==> dmesg <==
	[  +1.995876] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661313] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.156451] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.065212] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077302] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.199083] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.141550] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.287555] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +4.282732] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.074901] kauditd_printk_skb: 130 callbacks suppressed
	[May20 14:22] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[ +12.115623] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.080019] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.627183] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +0.196918] systemd-fstab-generator[1762]: Ignoring "noauto" option for root device
	[  +0.243838] systemd-fstab-generator[1791]: Ignoring "noauto" option for root device
	[  +0.205969] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.341267] systemd-fstab-generator[1832]: Ignoring "noauto" option for root device
	[May20 14:23] systemd-fstab-generator[1996]: Ignoring "noauto" option for root device
	[  +0.081268] kauditd_printk_skb: 167 callbacks suppressed
	[  +2.620926] systemd-fstab-generator[2121]: Ignoring "noauto" option for root device
	[  +5.935001] systemd-fstab-generator[2443]: Ignoring "noauto" option for root device
	[  +0.099080] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [1880b8b0412d841dce355a4e11a153937110621726b0d566e17e0f531a89053e] <==
	{"level":"info","ts":"2024-05-20T14:23:52.541267Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:23:52.541345Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:23:52.541722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-05-20T14:23:52.541828Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-05-20T14:23:52.542007Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:23:52.542133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:23:52.550113Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:23:52.556594Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-05-20T14:23:52.556795Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-05-20T14:23:52.563568Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:23:52.56351Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:23:53.473481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:23:53.473616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:23:53.473657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-05-20T14:23:53.473687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:23:53.473719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-05-20T14:23:53.47375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:23:53.473778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-05-20T14:23:53.483446Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:kubernetes-upgrade-366203 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:23:53.483735Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:23:53.483773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:23:53.483809Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T14:23:53.483869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:23:53.485915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T14:23:53.485922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	
	
	==> etcd [c51ec6f170f18edf00453204cd488fdd7b72a96978a7435d3b06cf61199767a9] <==
	{"level":"info","ts":"2024-05-20T14:22:16.871399Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"43.745937ms"}
	{"level":"info","ts":"2024-05-20T14:22:16.890075Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-20T14:22:16.934489Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","commit-index":308}
	{"level":"info","ts":"2024-05-20T14:22:16.947579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-20T14:22:16.947721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became follower at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:16.947804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a14f9258d3b66c75 [peers: [], term: 2, commit: 308, applied: 0, lastindex: 308, lastterm: 2]"}
	{"level":"warn","ts":"2024-05-20T14:22:16.961379Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-05-20T14:22:16.980036Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":303}
	{"level":"info","ts":"2024-05-20T14:22:16.988845Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-05-20T14:22:17.003085Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a14f9258d3b66c75","timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:22:17.012726Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a14f9258d3b66c75"}
	{"level":"info","ts":"2024-05-20T14:22:17.012896Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a14f9258d3b66c75","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-20T14:22:17.014322Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-20T14:22:17.015994Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:17.019777Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:17.019808Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:17.020485Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:22:17.020672Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:22:17.020736Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:22:17.019703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-05-20T14:22:17.020921Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-05-20T14:22:17.021046Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:17.021106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:17.022311Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-05-20T14:22:17.022373Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.196:2380"}
	
	
	==> kernel <==
	 14:23:59 up 2 min,  0 users,  load average: 0.26, 0.15, 0.06
	Linux kubernetes-upgrade-366203 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0fabfcf38760ba1ec46041b938a949787975d0c12519f3e5f3190978d5d952fa] <==
	I0520 14:23:54.977122       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0520 14:23:54.940790       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0520 14:23:54.940746       1 aggregator.go:163] waiting for initial CRD sync...
	I0520 14:23:55.077155       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 14:23:55.077349       1 aggregator.go:165] initial CRD sync complete...
	I0520 14:23:55.077389       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 14:23:55.077418       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 14:23:55.077680       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:23:55.139678       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:23:55.139978       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:23:55.140542       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 14:23:55.140675       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 14:23:55.140715       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 14:23:55.142408       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 14:23:55.142609       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 14:23:55.147244       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 14:23:55.148247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:23:55.148300       1 policy_source.go:224] refreshing policies
	I0520 14:23:55.165487       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 14:23:55.946151       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:23:56.768466       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 14:23:56.804647       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 14:23:56.896153       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 14:23:56.961944       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:23:56.973135       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [bcf9c906614aa18155b4374fd7b9307efc626db0394dff13da1025f9c8cd1658] <==
	I0520 14:22:16.828981       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0520 14:22:17.922045       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:17.922414       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0520 14:22:17.922537       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 14:22:17.923997       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:22:17.925251       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 14:22:17.925277       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 14:22:17.925682       1 instance.go:299] Using reconciler: lease
	W0520 14:22:17.932306       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:18.923386       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:18.923438       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:18.933597       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:20.658727       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:20.689425       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:20.836942       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:22.985066       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:23.063163       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:23.713383       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:26.461296       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:27.317535       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:27.401830       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:33.083765       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:33.401612       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 14:22:33.952416       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0520 14:22:37.927446       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [87b11a0755817d1fd2aa02f3007796a12ca9f1e8660c7210e102d50603de6a48] <==
	
	
	==> kube-controller-manager [f40bfcf56bc8a78fe229b17beaf9c21705c081a874930114c43e9f8d144a241f] <==
	I0520 14:23:57.268815       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0520 14:23:57.268931       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0520 14:23:57.271666       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0520 14:23:57.277031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0520 14:23:57.277467       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0520 14:23:57.279854       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0520 14:23:57.281389       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0520 14:23:57.281652       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0520 14:23:57.281716       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	E0520 14:23:57.319321       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0520 14:23:57.319422       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	E0520 14:23:57.366833       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0520 14:23:57.366890       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0520 14:23:57.417043       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0520 14:23:57.417161       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0520 14:23:57.417175       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0520 14:23:57.467154       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0520 14:23:57.467293       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0520 14:23:57.467308       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0520 14:23:57.517002       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0520 14:23:57.517047       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0520 14:23:57.566726       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0520 14:23:57.566786       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0520 14:23:57.566795       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0520 14:23:57.566802       1 shared_informer.go:320] Caches are synced for token_cleaner
	
	
	==> kube-scheduler [8adb1ee817338e647f4b83fa8894062398467e838e30bd5e50bebac0084d14a4] <==
	E0520 14:22:05.574603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 14:22:05.605864       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 14:22:05.606037       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:22:05.608469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 14:22:05.609549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 14:22:05.650404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 14:22:05.650445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 14:22:05.654255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 14:22:05.654575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 14:22:05.777736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 14:22:05.777969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 14:22:05.863939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 14:22:05.864110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 14:22:05.879845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 14:22:05.879968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 14:22:05.922955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 14:22:05.922986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 14:22:05.953345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 14:22:05.953523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 14:22:06.003502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 14:22:06.003546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 14:22:06.022002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 14:22:06.022053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0520 14:22:08.389080       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 14:22:15.053852       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 14:23:51 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:51.761053    2128 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.196:8443: connect: connection refused" node="kubernetes-upgrade-366203"
	May 20 14:23:51 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:51.919756    2128 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists"
	May 20 14:23:51 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:51.919835    2128 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:51 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:51.919862    2128 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:51 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:51.919928    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\\\" already exists\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203" podUID="7fd1e6da689a7cf27065fb5956e3f8ea"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.069079    2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-366203?timeout=10s\": dial tcp 192.168.39.196:8443: connect: connection refused" interval="800ms"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:52.162409    2128 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-366203"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.165571    2128 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.196:8443: connect: connection refused" node="kubernetes-upgrade-366203"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.576532    2128 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.576614    2128 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.576647    2128 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:52.576711    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\\\" already exists\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203" podUID="7fd1e6da689a7cf27065fb5956e3f8ea"
	May 20 14:23:52 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:52.967732    2128 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:55.200043    2128 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:55.200155    2128 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:55.451374    2128 apiserver.go:52] "Watching apiserver"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: I0520 14:23:55.459306    2128 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.463682    2128 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.463989    2128 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.464144    2128 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.464399    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\\\" already exists\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203" podUID="7fd1e6da689a7cf27065fb5956e3f8ea"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.955764    2128 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.955829    2128 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.955856    2128 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203"
	May 20 14:23:55 kubernetes-upgrade-366203 kubelet[2128]: E0520 14:23:55.955941    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-kubernetes-upgrade-366203_kube-system(7fd1e6da689a7cf27065fb5956e3f8ea)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-kubernetes-upgrade-366203_kube-system_7fd1e6da689a7cf27065fb5956e3f8ea_1\\\" already exists\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-366203" podUID="7fd1e6da689a7cf27065fb5956e3f8ea"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-366203 -n kubernetes-upgrade-366203
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-366203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-366203 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-366203 describe pod storage-provisioner: exit status 1 (77.956243ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-366203 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-366203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-366203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-366203: (1.78117355s)
--- FAIL: TestKubernetesUpgrade (453.90s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (48.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-462644 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0520 14:21:42.806830  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 14:21:59.760993  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-462644 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.064404552s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-462644] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-462644" primary control-plane node in "pause-462644" cluster
	* Updating the running kvm2 "pause-462644" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-462644" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 14:21:42.254047  655980 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:21:42.254322  655980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:21:42.254332  655980 out.go:304] Setting ErrFile to fd 2...
	I0520 14:21:42.254336  655980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:21:42.254505  655980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:21:42.255135  655980 out.go:298] Setting JSON to false
	I0520 14:21:42.256272  655980 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14642,"bootTime":1716200260,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:21:42.256344  655980 start.go:139] virtualization: kvm guest
	I0520 14:21:42.259555  655980 out.go:177] * [pause-462644] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:21:42.262016  655980 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:21:42.261965  655980 notify.go:220] Checking for updates...
	I0520 14:21:42.264388  655980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:21:42.266769  655980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:21:42.269019  655980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:21:42.271267  655980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:21:42.273534  655980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:21:42.276097  655980 config.go:182] Loaded profile config "pause-462644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:21:42.276607  655980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:21:42.276672  655980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:21:42.295536  655980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0520 14:21:42.296110  655980 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:21:42.296784  655980 main.go:141] libmachine: Using API Version  1
	I0520 14:21:42.296806  655980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:21:42.297196  655980 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:21:42.297468  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:42.297777  655980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:21:42.298239  655980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:21:42.298285  655980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:21:42.320918  655980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0520 14:21:42.321532  655980 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:21:42.322161  655980 main.go:141] libmachine: Using API Version  1
	I0520 14:21:42.322181  655980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:21:42.322620  655980 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:21:42.322911  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:42.368388  655980 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:21:42.371262  655980 start.go:297] selected driver: kvm2
	I0520 14:21:42.371294  655980 start.go:901] validating driver "kvm2" against &{Name:pause-462644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-462644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:21:42.371471  655980 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:21:42.371949  655980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:21:42.372052  655980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:21:42.390762  655980 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:21:42.391880  655980 cni.go:84] Creating CNI manager for ""
	I0520 14:21:42.391899  655980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:21:42.391983  655980 start.go:340] cluster config:
	{Name:pause-462644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-462644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:21:42.392203  655980 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:21:42.394958  655980 out.go:177] * Starting "pause-462644" primary control-plane node in "pause-462644" cluster
	I0520 14:21:42.397206  655980 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:21:42.397284  655980 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:21:42.397299  655980 cache.go:56] Caching tarball of preloaded images
	I0520 14:21:42.397408  655980 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:21:42.397421  655980 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 14:21:42.397547  655980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/config.json ...
	I0520 14:21:42.397779  655980 start.go:360] acquireMachinesLock for pause-462644: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:21:52.482704  655980 start.go:364] duration metric: took 10.084873819s to acquireMachinesLock for "pause-462644"
	I0520 14:21:52.482775  655980 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:21:52.482784  655980 fix.go:54] fixHost starting: 
	I0520 14:21:52.483218  655980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:21:52.483255  655980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:21:52.503243  655980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0520 14:21:52.503879  655980 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:21:52.504414  655980 main.go:141] libmachine: Using API Version  1
	I0520 14:21:52.504442  655980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:21:52.504821  655980 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:21:52.505048  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:52.505268  655980 main.go:141] libmachine: (pause-462644) Calling .GetState
	I0520 14:21:52.507194  655980 fix.go:112] recreateIfNeeded on pause-462644: state=Running err=<nil>
	W0520 14:21:52.507217  655980 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:21:52.509626  655980 out.go:177] * Updating the running kvm2 "pause-462644" VM ...
	I0520 14:21:52.511777  655980 machine.go:94] provisionDockerMachine start ...
	I0520 14:21:52.511802  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:52.512043  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:52.515283  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.515886  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:52.515924  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.516090  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:52.516323  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.516576  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.516730  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:52.516937  655980 main.go:141] libmachine: Using SSH client type: native
	I0520 14:21:52.517176  655980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0520 14:21:52.517191  655980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:21:52.634125  655980 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-462644
	
	I0520 14:21:52.634164  655980 main.go:141] libmachine: (pause-462644) Calling .GetMachineName
	I0520 14:21:52.634481  655980 buildroot.go:166] provisioning hostname "pause-462644"
	I0520 14:21:52.634514  655980 main.go:141] libmachine: (pause-462644) Calling .GetMachineName
	I0520 14:21:52.634700  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:52.637646  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.638028  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:52.638046  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.638254  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:52.638562  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.638757  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.638941  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:52.639113  655980 main.go:141] libmachine: Using SSH client type: native
	I0520 14:21:52.639283  655980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0520 14:21:52.639296  655980 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-462644 && echo "pause-462644" | sudo tee /etc/hostname
	I0520 14:21:52.767087  655980 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-462644
	
	I0520 14:21:52.767123  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:52.770822  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.771221  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:52.771256  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.771489  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:52.771736  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.771891  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:52.772003  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:52.772155  655980 main.go:141] libmachine: Using SSH client type: native
	I0520 14:21:52.772338  655980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0520 14:21:52.772383  655980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-462644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-462644/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-462644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:21:52.892059  655980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 14:21:52.892115  655980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:21:52.892137  655980 buildroot.go:174] setting up certificates
	I0520 14:21:52.892151  655980 provision.go:84] configureAuth start
	I0520 14:21:52.892160  655980 main.go:141] libmachine: (pause-462644) Calling .GetMachineName
	I0520 14:21:52.892589  655980 main.go:141] libmachine: (pause-462644) Calling .GetIP
	I0520 14:21:52.896479  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.896962  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:52.896993  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.897282  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:52.900567  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.901060  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:52.901089  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:52.901323  655980 provision.go:143] copyHostCerts
	I0520 14:21:52.901392  655980 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:21:52.901418  655980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:21:52.901478  655980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:21:52.901595  655980 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:21:52.901609  655980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:21:52.901646  655980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:21:52.901763  655980 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:21:52.901844  655980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:21:52.901897  655980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:21:52.902017  655980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.pause-462644 san=[127.0.0.1 192.168.50.77 localhost minikube pause-462644]
	I0520 14:21:53.052184  655980 provision.go:177] copyRemoteCerts
	I0520 14:21:53.052246  655980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:21:53.052273  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:53.055064  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:53.055363  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:53.055393  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:53.055524  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:53.055725  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:53.055917  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:53.056113  655980 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/pause-462644/id_rsa Username:docker}
	I0520 14:21:53.153314  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:21:53.183370  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0520 14:21:53.213799  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:21:53.244741  655980 provision.go:87] duration metric: took 352.572619ms to configureAuth
	I0520 14:21:53.244777  655980 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:21:53.245080  655980 config.go:182] Loaded profile config "pause-462644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:21:53.245205  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:53.248181  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:53.248631  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:53.248663  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:53.248908  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:53.249153  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:53.249380  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:53.249557  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:53.249778  655980 main.go:141] libmachine: Using SSH client type: native
	I0520 14:21:53.249974  655980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0520 14:21:53.249991  655980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:21:59.183616  655980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:21:59.183653  655980 machine.go:97] duration metric: took 6.671857988s to provisionDockerMachine
	I0520 14:21:59.183664  655980 start.go:293] postStartSetup for "pause-462644" (driver="kvm2")
	I0520 14:21:59.183674  655980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:21:59.183692  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:59.184184  655980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:21:59.184218  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:59.187283  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.187692  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:59.187720  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.187890  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:59.188106  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:59.188297  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:59.188473  655980 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/pause-462644/id_rsa Username:docker}
	I0520 14:21:59.274478  655980 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:21:59.278549  655980 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:21:59.278574  655980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:21:59.278635  655980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:21:59.278703  655980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:21:59.278787  655980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:21:59.289115  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:21:59.312499  655980 start.go:296] duration metric: took 128.818438ms for postStartSetup
	I0520 14:21:59.312544  655980 fix.go:56] duration metric: took 6.829760831s for fixHost
	I0520 14:21:59.312569  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:59.315408  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.315798  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:59.315830  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.315991  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:59.316212  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:59.316411  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:59.316570  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:59.316749  655980 main.go:141] libmachine: Using SSH client type: native
	I0520 14:21:59.316962  655980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0520 14:21:59.316974  655980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 14:21:59.458040  655980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716214919.451427042
	
	I0520 14:21:59.458063  655980 fix.go:216] guest clock: 1716214919.451427042
	I0520 14:21:59.458071  655980 fix.go:229] Guest: 2024-05-20 14:21:59.451427042 +0000 UTC Remote: 2024-05-20 14:21:59.312547871 +0000 UTC m=+17.098311900 (delta=138.879171ms)
	I0520 14:21:59.458119  655980 fix.go:200] guest clock delta is within tolerance: 138.879171ms
	I0520 14:21:59.458127  655980 start.go:83] releasing machines lock for "pause-462644", held for 6.97537767s
	I0520 14:21:59.458191  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:59.458473  655980 main.go:141] libmachine: (pause-462644) Calling .GetIP
	I0520 14:21:59.461447  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.461868  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:59.461898  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.462059  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:59.462682  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:59.462914  655980 main.go:141] libmachine: (pause-462644) Calling .DriverName
	I0520 14:21:59.463030  655980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:21:59.463091  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:59.463222  655980 ssh_runner.go:195] Run: cat /version.json
	I0520 14:21:59.463249  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHHostname
	I0520 14:21:59.466132  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.466498  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:59.466537  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.466563  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.466739  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:59.466933  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:59.467134  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:21:59.467153  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:21:59.467155  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:59.467345  655980 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/pause-462644/id_rsa Username:docker}
	I0520 14:21:59.467357  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHPort
	I0520 14:21:59.467603  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHKeyPath
	I0520 14:21:59.467767  655980 main.go:141] libmachine: (pause-462644) Calling .GetSSHUsername
	I0520 14:21:59.467957  655980 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/pause-462644/id_rsa Username:docker}
	W0520 14:21:59.586547  655980 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:21:59.586663  655980 ssh_runner.go:195] Run: systemctl --version
	I0520 14:21:59.597931  655980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:21:59.755550  655980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 14:21:59.769032  655980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:21:59.769115  655980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:21:59.780993  655980 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 14:21:59.781027  655980 start.go:494] detecting cgroup driver to use...
	I0520 14:21:59.781107  655980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:21:59.804345  655980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:21:59.819895  655980 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:21:59.819975  655980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:21:59.833803  655980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:21:59.849414  655980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:22:00.016865  655980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:22:00.267905  655980 docker.go:233] disabling docker service ...
	I0520 14:22:00.268016  655980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:22:00.305535  655980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:22:00.324841  655980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:22:00.543978  655980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:22:00.716729  655980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:22:00.736693  655980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:22:00.756939  655980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 14:22:00.757039  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.768876  655980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:22:00.768951  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.779213  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.789887  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.801212  655980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:22:00.812071  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.822429  655980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.836661  655980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:00.848653  655980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:22:00.859565  655980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:22:00.869699  655980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:01.067179  655980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 14:22:01.777552  655980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 14:22:01.777627  655980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 14:22:01.782552  655980 start.go:562] Will wait 60s for crictl version
	I0520 14:22:01.782622  655980 ssh_runner.go:195] Run: which crictl
	I0520 14:22:01.787089  655980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 14:22:01.834820  655980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 14:22:01.834929  655980 ssh_runner.go:195] Run: crio --version
	I0520 14:22:01.874552  655980 ssh_runner.go:195] Run: crio --version
	I0520 14:22:01.919362  655980 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 14:22:01.921855  655980 main.go:141] libmachine: (pause-462644) Calling .GetIP
	I0520 14:22:01.925359  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:22:01.925803  655980 main.go:141] libmachine: (pause-462644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:4b:be", ip: ""} in network mk-pause-462644: {Iface:virbr4 ExpiryTime:2024-05-20 15:20:45 +0000 UTC Type:0 Mac:52:54:00:3e:4b:be Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:pause-462644 Clientid:01:52:54:00:3e:4b:be}
	I0520 14:22:01.925830  655980 main.go:141] libmachine: (pause-462644) DBG | domain pause-462644 has defined IP address 192.168.50.77 and MAC address 52:54:00:3e:4b:be in network mk-pause-462644
	I0520 14:22:01.926082  655980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 14:22:01.930909  655980 kubeadm.go:877] updating cluster {Name:pause-462644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-462644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 14:22:01.931084  655980 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:22:01.931151  655980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:22:01.989699  655980 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:22:01.989727  655980 crio.go:433] Images already preloaded, skipping extraction
	I0520 14:22:01.989793  655980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 14:22:02.026438  655980 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 14:22:02.026474  655980 cache_images.go:84] Images are preloaded, skipping loading
	I0520 14:22:02.026486  655980 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.30.1 crio true true} ...
	I0520 14:22:02.026631  655980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-462644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-462644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 14:22:02.026728  655980 ssh_runner.go:195] Run: crio config
	I0520 14:22:02.096885  655980 cni.go:84] Creating CNI manager for ""
	I0520 14:22:02.096909  655980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:22:02.096954  655980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 14:22:02.096989  655980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-462644 NodeName:pause-462644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 14:22:02.097145  655980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-462644"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 14:22:02.097223  655980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 14:22:02.108273  655980 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 14:22:02.108353  655980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 14:22:02.118615  655980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0520 14:22:02.139305  655980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 14:22:02.160360  655980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 14:22:02.180274  655980 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0520 14:22:02.184275  655980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:02.336931  655980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:22:02.353825  655980 certs.go:68] Setting up /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644 for IP: 192.168.50.77
	I0520 14:22:02.353860  655980 certs.go:194] generating shared ca certs ...
	I0520 14:22:02.353883  655980 certs.go:226] acquiring lock for ca certs: {Name:mk53e9c833a3f951559df8d30971a14829d3f666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:22:02.354086  655980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key
	I0520 14:22:02.354152  655980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key
	I0520 14:22:02.354165  655980 certs.go:256] generating profile certs ...
	I0520 14:22:02.354275  655980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/client.key
	I0520 14:22:02.354348  655980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/apiserver.key.58692347
	I0520 14:22:02.354400  655980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/proxy-client.key
	I0520 14:22:02.354540  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem (1338 bytes)
	W0520 14:22:02.354575  655980 certs.go:480] ignoring /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867_empty.pem, impossibly tiny 0 bytes
	I0520 14:22:02.354588  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 14:22:02.354620  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem (1082 bytes)
	I0520 14:22:02.354653  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem (1123 bytes)
	I0520 14:22:02.354678  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem (1679 bytes)
	I0520 14:22:02.354730  655980 certs.go:484] found cert: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:22:02.355547  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 14:22:02.381074  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 14:22:02.406659  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 14:22:02.431513  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 14:22:02.583260  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 14:22:02.767720  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 14:22:02.874063  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 14:22:02.918805  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/pause-462644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 14:22:02.960526  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/609867.pem --> /usr/share/ca-certificates/609867.pem (1338 bytes)
	I0520 14:22:02.995025  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /usr/share/ca-certificates/6098672.pem (1708 bytes)
	I0520 14:22:03.040651  655980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 14:22:03.071131  655980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 14:22:03.090820  655980 ssh_runner.go:195] Run: openssl version
	I0520 14:22:03.100564  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/609867.pem && ln -fs /usr/share/ca-certificates/609867.pem /etc/ssl/certs/609867.pem"
	I0520 14:22:03.118115  655980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/609867.pem
	I0520 14:22:03.127657  655980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 13:07 /usr/share/ca-certificates/609867.pem
	I0520 14:22:03.127744  655980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/609867.pem
	I0520 14:22:03.134896  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/609867.pem /etc/ssl/certs/51391683.0"
	I0520 14:22:03.146737  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6098672.pem && ln -fs /usr/share/ca-certificates/6098672.pem /etc/ssl/certs/6098672.pem"
	I0520 14:22:03.160405  655980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6098672.pem
	I0520 14:22:03.166287  655980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 13:07 /usr/share/ca-certificates/6098672.pem
	I0520 14:22:03.166352  655980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6098672.pem
	I0520 14:22:03.172015  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6098672.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 14:22:03.182566  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 14:22:03.193348  655980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:22:03.197896  655980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 12:55 /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:22:03.197964  655980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 14:22:03.203615  655980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 14:22:03.212674  655980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 14:22:03.217185  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 14:22:03.222750  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 14:22:03.228701  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 14:22:03.234408  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 14:22:03.239819  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 14:22:03.245170  655980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 14:22:03.250756  655980 kubeadm.go:391] StartCluster: {Name:pause-462644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-462644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:22:03.250897  655980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 14:22:03.250988  655980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 14:22:03.290871  655980 cri.go:89] found id: "c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45"
	I0520 14:22:03.290899  655980 cri.go:89] found id: "3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4"
	I0520 14:22:03.290908  655980 cri.go:89] found id: "4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae"
	I0520 14:22:03.290913  655980 cri.go:89] found id: "3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9"
	I0520 14:22:03.290916  655980 cri.go:89] found id: "abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1"
	I0520 14:22:03.290921  655980 cri.go:89] found id: "5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153"
	I0520 14:22:03.290925  655980 cri.go:89] found id: ""
	I0520 14:22:03.290976  655980 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-462644 -n pause-462644
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-462644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-462644 logs -n 25: (1.392505684s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat                            | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat                            | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat                            | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat                            | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                                | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo find                           | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo crio                           | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-862860                                     | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	| start   | -p pause-462644 --memory=2048                        | pause-462644              | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:21 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-565318 ssh                              | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-565318 -- sudo                       | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-565318                               | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	| start   | -p auto-862860 --memory=3072                         | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:22 UTC |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-366203                         | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:21 UTC |
	| start   | -p kubernetes-upgrade-366203                         | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:22 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-462644                                      | pause-462644              | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:22 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-862860 pgrep -a                              | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:22 UTC | 20 May 24 14:22 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-366203                         | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:22 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-366203                         | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:22 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:22:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 14:22:13.941220  656347 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:22:13.941503  656347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:22:13.941514  656347 out.go:304] Setting ErrFile to fd 2...
	I0520 14:22:13.941521  656347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:22:13.941745  656347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:22:13.942318  656347 out.go:298] Setting JSON to false
	I0520 14:22:13.943307  656347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14674,"bootTime":1716200260,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:22:13.943376  656347 start.go:139] virtualization: kvm guest
	I0520 14:22:13.946431  656347 out.go:177] * [kubernetes-upgrade-366203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:22:13.948760  656347 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:22:13.948766  656347 notify.go:220] Checking for updates...
	I0520 14:22:13.951134  656347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:22:13.953236  656347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:22:13.955335  656347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:22:13.957612  656347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:22:13.959644  656347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:22:13.962159  656347 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:13.962673  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:13.962733  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:13.979901  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0520 14:22:13.980381  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:13.981062  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:13.981091  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:13.981542  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:13.981794  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:13.982100  656347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:22:13.982403  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:13.982448  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:13.999756  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0520 14:22:14.000275  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:14.000835  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:14.000859  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:14.001233  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:14.001452  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.039896  656347 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:22:14.042241  656347 start.go:297] selected driver: kvm2
	I0520 14:22:14.042269  656347 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:22:14.042373  656347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:22:14.043140  656347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:22:14.043232  656347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:22:14.061077  656347 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:22:14.061647  656347 cni.go:84] Creating CNI manager for ""
	I0520 14:22:14.061674  656347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:22:14.061731  656347 start.go:340] cluster config:
	{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:22:14.061881  656347 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:22:14.064914  656347 out.go:177] * Starting "kubernetes-upgrade-366203" primary control-plane node in "kubernetes-upgrade-366203" cluster
	I0520 14:22:14.067877  656347 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:22:14.067958  656347 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:22:14.067972  656347 cache.go:56] Caching tarball of preloaded images
	I0520 14:22:14.068054  656347 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:22:14.068068  656347 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 14:22:14.068185  656347 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/config.json ...
	I0520 14:22:14.068431  656347 start.go:360] acquireMachinesLock for kubernetes-upgrade-366203: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:22:14.068487  656347 start.go:364] duration metric: took 32.069µs to acquireMachinesLock for "kubernetes-upgrade-366203"
	I0520 14:22:14.068517  656347 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:22:14.068529  656347 fix.go:54] fixHost starting: 
	I0520 14:22:14.068894  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:14.068922  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:14.086332  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0520 14:22:14.086900  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:14.087562  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:14.087598  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:14.087982  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:14.088210  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.088451  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:22:14.090170  656347 fix.go:112] recreateIfNeeded on kubernetes-upgrade-366203: state=Running err=<nil>
	W0520 14:22:14.090195  656347 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:22:14.092966  656347 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-366203" VM ...
	I0520 14:22:12.907015  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:15.402531  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:14.095156  656347 machine.go:94] provisionDockerMachine start ...
	I0520 14:22:14.095190  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.095458  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.098489  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.098986  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.099031  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.099322  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.099565  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.099725  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.103394  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.105355  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.105578  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.105586  656347 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:22:14.234378  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-366203
	
	I0520 14:22:14.234422  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.234735  656347 buildroot.go:166] provisioning hostname "kubernetes-upgrade-366203"
	I0520 14:22:14.234781  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.234981  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.238149  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.238665  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.238712  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.238831  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.239057  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.239232  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.239376  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.239567  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.239738  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.239751  656347 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-366203 && echo "kubernetes-upgrade-366203" | sudo tee /etc/hostname
	I0520 14:22:14.362879  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-366203
	
	I0520 14:22:14.362921  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.366183  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.366527  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.366558  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.366783  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.367006  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.367255  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.367432  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.367687  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.367861  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.367880  656347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-366203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-366203/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-366203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:22:14.478472  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 14:22:14.478520  656347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:22:14.478570  656347 buildroot.go:174] setting up certificates
	I0520 14:22:14.478581  656347 provision.go:84] configureAuth start
	I0520 14:22:14.478595  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.478867  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:22:14.481934  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.482423  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.482459  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.482707  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.485173  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.485560  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.485592  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.485955  656347 provision.go:143] copyHostCerts
	I0520 14:22:14.486030  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:22:14.486044  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:22:14.486117  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:22:14.486237  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:22:14.486247  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:22:14.486278  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:22:14.486351  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:22:14.486361  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:22:14.486389  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:22:14.486459  656347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-366203 san=[127.0.0.1 192.168.39.196 kubernetes-upgrade-366203 localhost minikube]
	I0520 14:22:14.730405  656347 provision.go:177] copyRemoteCerts
	I0520 14:22:14.730484  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:22:14.730516  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.733650  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.734119  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.734152  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.734441  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.734733  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.734920  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.735079  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:14.816564  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:22:14.845370  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 14:22:14.879388  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:22:14.910491  656347 provision.go:87] duration metric: took 431.891438ms to configureAuth
	I0520 14:22:14.910531  656347 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:22:14.910715  656347 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:14.910794  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.913774  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.914159  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.914194  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.914329  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.914574  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.914786  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.914935  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.915092  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.915305  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.915326  656347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:22:15.840762  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:22:15.840800  656347 machine.go:97] duration metric: took 1.745626869s to provisionDockerMachine
	I0520 14:22:15.840815  656347 start.go:293] postStartSetup for "kubernetes-upgrade-366203" (driver="kvm2")
	I0520 14:22:15.840829  656347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:22:15.840854  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:15.841232  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:22:15.841285  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:15.843888  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.844260  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:15.844291  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.844504  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:15.844724  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.844921  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:15.845112  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:15.928299  656347 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:22:15.932824  656347 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:22:15.932856  656347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:22:15.932938  656347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:22:15.933022  656347 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:22:15.933113  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:22:15.942657  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:22:15.966705  656347 start.go:296] duration metric: took 125.870507ms for postStartSetup
	I0520 14:22:15.966764  656347 fix.go:56] duration metric: took 1.898235402s for fixHost
	I0520 14:22:15.966789  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:15.969884  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.970274  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:15.970321  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.970496  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:15.970709  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.970861  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.970953  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:15.971117  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:15.971289  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:15.971301  656347 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 14:22:16.073754  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716214936.067341918
	
	I0520 14:22:16.073781  656347 fix.go:216] guest clock: 1716214936.067341918
	I0520 14:22:16.073790  656347 fix.go:229] Guest: 2024-05-20 14:22:16.067341918 +0000 UTC Remote: 2024-05-20 14:22:15.966769641 +0000 UTC m=+2.065791293 (delta=100.572277ms)
	I0520 14:22:16.073819  656347 fix.go:200] guest clock delta is within tolerance: 100.572277ms
	I0520 14:22:16.073825  656347 start.go:83] releasing machines lock for "kubernetes-upgrade-366203", held for 2.005324672s
	I0520 14:22:16.073852  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.074205  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:22:16.077047  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.077383  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.077415  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.077547  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078070  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078266  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078380  656347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:22:16.078462  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:16.078508  656347 ssh_runner.go:195] Run: cat /version.json
	I0520 14:22:16.078534  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:16.081416  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.081644  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.081916  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.081971  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.082005  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.082023  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.082081  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:16.082298  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:16.082310  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:16.082507  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:16.082527  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:16.082692  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:16.082769  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:16.082917  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	W0520 14:22:16.187881  656347 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:22:16.187979  656347 ssh_runner.go:195] Run: systemctl --version
	I0520 14:22:16.195466  656347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:22:16.384723  656347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 14:22:16.412438  656347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:22:16.412531  656347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:22:16.440676  656347 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 14:22:16.440703  656347 start.go:494] detecting cgroup driver to use...
	I0520 14:22:16.440784  656347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:22:16.490220  656347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:22:16.521876  656347 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:22:16.521954  656347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:22:16.548998  656347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:22:16.601518  656347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:22:16.804104  656347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:22:16.983717  656347 docker.go:233] disabling docker service ...
	I0520 14:22:16.983794  656347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:22:17.004985  656347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:22:17.024966  656347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:22:17.233061  656347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:22:17.426445  656347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:22:17.441817  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:22:17.461971  656347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 14:22:17.462050  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.476664  656347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:22:17.476727  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.491558  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.506498  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.517864  656347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:22:17.530588  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.541486  656347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.554534  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.565509  656347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:22:17.576458  656347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:22:17.585883  656347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:17.764694  656347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 14:22:17.403044  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:19.902145  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:21.400526  655980 pod_ready.go:92] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:21.400549  655980 pod_ready.go:81] duration metric: took 10.504918414s for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:21.400559  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.907501  655980 pod_ready.go:92] pod "kube-apiserver-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.907522  655980 pod_ready.go:81] duration metric: took 1.50695678s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.907532  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.912294  655980 pod_ready.go:92] pod "kube-controller-manager-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.912313  655980 pod_ready.go:81] duration metric: took 4.774877ms for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.912322  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.918617  655980 pod_ready.go:92] pod "kube-proxy-sdp6h" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.918643  655980 pod_ready.go:81] duration metric: took 6.315353ms for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.918651  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.922505  655980 pod_ready.go:92] pod "kube-scheduler-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.922526  655980 pod_ready.go:81] duration metric: took 3.868104ms for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.922533  655980 pod_ready.go:38] duration metric: took 12.546229083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:22:22.922552  655980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 14:22:22.935943  655980 ops.go:34] apiserver oom_adj: -16
	I0520 14:22:22.935964  655980 kubeadm.go:591] duration metric: took 19.585395822s to restartPrimaryControlPlane
	I0520 14:22:22.935974  655980 kubeadm.go:393] duration metric: took 19.685229261s to StartCluster
	I0520 14:22:22.935993  655980 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:22:22.936081  655980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:22:22.939711  655980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:22:22.940241  655980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:22:22.943079  655980 out.go:177] * Verifying Kubernetes components...
	I0520 14:22:22.940407  655980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 14:22:22.940612  655980 config.go:182] Loaded profile config "pause-462644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:22.945433  655980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:22.947670  655980 out.go:177] * Enabled addons: 
	I0520 14:22:22.950076  655980 addons.go:505] duration metric: took 9.687834ms for enable addons: enabled=[]
	I0520 14:22:23.100411  655980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:22:23.115835  655980 node_ready.go:35] waiting up to 6m0s for node "pause-462644" to be "Ready" ...
	I0520 14:22:23.118647  655980 node_ready.go:49] node "pause-462644" has status "Ready":"True"
	I0520 14:22:23.118667  655980 node_ready.go:38] duration metric: took 2.790329ms for node "pause-462644" to be "Ready" ...
	I0520 14:22:23.118675  655980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:22:23.123170  655980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.399049  655980 pod_ready.go:92] pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:23.399085  655980 pod_ready.go:81] duration metric: took 275.894569ms for pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.399096  655980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.800279  655980 pod_ready.go:92] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:23.800310  655980 pod_ready.go:81] duration metric: took 401.207937ms for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.800322  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.199136  655980 pod_ready.go:92] pod "kube-apiserver-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.199176  655980 pod_ready.go:81] duration metric: took 398.833905ms for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.199191  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.600201  655980 pod_ready.go:92] pod "kube-controller-manager-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.600229  655980 pod_ready.go:81] duration metric: took 401.028998ms for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.600245  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.999720  655980 pod_ready.go:92] pod "kube-proxy-sdp6h" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.999745  655980 pod_ready.go:81] duration metric: took 399.492964ms for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.999755  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:25.399247  655980 pod_ready.go:92] pod "kube-scheduler-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:25.399279  655980 pod_ready.go:81] duration metric: took 399.516266ms for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:25.399290  655980 pod_ready.go:38] duration metric: took 2.280604318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:22:25.399310  655980 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:22:25.399376  655980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:22:25.412638  655980 api_server.go:72] duration metric: took 2.472352557s to wait for apiserver process to appear ...
	I0520 14:22:25.412662  655980 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:22:25.412682  655980 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0520 14:22:25.416800  655980 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0520 14:22:25.417912  655980 api_server.go:141] control plane version: v1.30.1
	I0520 14:22:25.417961  655980 api_server.go:131] duration metric: took 5.291014ms to wait for apiserver health ...
	I0520 14:22:25.417972  655980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:22:25.601856  655980 system_pods.go:59] 6 kube-system pods found
	I0520 14:22:25.601888  655980 system_pods.go:61] "coredns-7db6d8ff4d-lvxbz" [c4c57d06-48a2-4e2e-a5b3-43137c113173] Running
	I0520 14:22:25.601893  655980 system_pods.go:61] "etcd-pause-462644" [4ad49e79-7b2a-4f37-b97d-ef871b9d0b16] Running
	I0520 14:22:25.601896  655980 system_pods.go:61] "kube-apiserver-pause-462644" [b64c449a-053f-4d9a-93de-1603a8ae1cb7] Running
	I0520 14:22:25.601900  655980 system_pods.go:61] "kube-controller-manager-pause-462644" [63ece984-d10c-472b-bea7-c02aa8dcbd17] Running
	I0520 14:22:25.601902  655980 system_pods.go:61] "kube-proxy-sdp6h" [b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e] Running
	I0520 14:22:25.601905  655980 system_pods.go:61] "kube-scheduler-pause-462644" [ac70ec8d-62fe-484d-a7ad-ee7f21d938a6] Running
	I0520 14:22:25.601912  655980 system_pods.go:74] duration metric: took 183.925149ms to wait for pod list to return data ...
	I0520 14:22:25.601919  655980 default_sa.go:34] waiting for default service account to be created ...
	I0520 14:22:25.799895  655980 default_sa.go:45] found service account: "default"
	I0520 14:22:25.799944  655980 default_sa.go:55] duration metric: took 198.016228ms for default service account to be created ...
	I0520 14:22:25.799959  655980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 14:22:26.001478  655980 system_pods.go:86] 6 kube-system pods found
	I0520 14:22:26.001510  655980 system_pods.go:89] "coredns-7db6d8ff4d-lvxbz" [c4c57d06-48a2-4e2e-a5b3-43137c113173] Running
	I0520 14:22:26.001516  655980 system_pods.go:89] "etcd-pause-462644" [4ad49e79-7b2a-4f37-b97d-ef871b9d0b16] Running
	I0520 14:22:26.001521  655980 system_pods.go:89] "kube-apiserver-pause-462644" [b64c449a-053f-4d9a-93de-1603a8ae1cb7] Running
	I0520 14:22:26.001525  655980 system_pods.go:89] "kube-controller-manager-pause-462644" [63ece984-d10c-472b-bea7-c02aa8dcbd17] Running
	I0520 14:22:26.001529  655980 system_pods.go:89] "kube-proxy-sdp6h" [b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e] Running
	I0520 14:22:26.001535  655980 system_pods.go:89] "kube-scheduler-pause-462644" [ac70ec8d-62fe-484d-a7ad-ee7f21d938a6] Running
	I0520 14:22:26.001542  655980 system_pods.go:126] duration metric: took 201.575237ms to wait for k8s-apps to be running ...
	I0520 14:22:26.001549  655980 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 14:22:26.001598  655980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 14:22:26.018317  655980 system_svc.go:56] duration metric: took 16.75747ms WaitForService to wait for kubelet
	I0520 14:22:26.018353  655980 kubeadm.go:576] duration metric: took 3.078068872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:22:26.018377  655980 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:22:26.201060  655980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:22:26.201089  655980 node_conditions.go:123] node cpu capacity is 2
	I0520 14:22:26.201106  655980 node_conditions.go:105] duration metric: took 182.720739ms to run NodePressure ...
	I0520 14:22:26.201121  655980 start.go:240] waiting for startup goroutines ...
	I0520 14:22:26.201130  655980 start.go:245] waiting for cluster config update ...
	I0520 14:22:26.201139  655980 start.go:254] writing updated cluster config ...
	I0520 14:22:26.201446  655980 ssh_runner.go:195] Run: rm -f paused
	I0520 14:22:26.255349  655980 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 14:22:26.259714  655980 out.go:177] * Done! kubectl is now configured to use "pause-462644" cluster and "default" namespace by default
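For reference, the readiness sequence above ends with minikube locating the kube-apiserver process over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*) and then polling the apiserver's /healthz endpoint until it returns HTTP 200 with the body "ok". The Go sketch below approximates that probe; it is not minikube's own code. The endpoint 192.168.50.77:8443 is taken from this run's log, TLS verification is skipped only to keep the sketch short (minikube trusts the cluster CA and presents client certificates), and an unauthenticated request may be rejected if anonymous auth is disabled.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Probe the apiserver healthz endpoint the way the log above does:
    // treat an HTTP 200 response whose body is exactly "ok" as healthy.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only; the real check verifies the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.50.77:8443/healthz") // address from this run's log
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        healthy := resp.StatusCode == http.StatusOK && string(body) == "ok"
        fmt.Printf("healthz returned %d %q (healthy=%v)\n", resp.StatusCode, body, healthy)
    }

From a shell, curl -k https://192.168.50.77:8443/healthz performs the same check.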
	
	
	==> CRI-O <==
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.949299371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81bcc08c-0312-4a26-8819-7d0a60225979 name=/runtime.v1.RuntimeService/Version
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.950609381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc989729-431c-4c02-8b30-14bf3314230f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.951020870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214946950995546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc989729-431c-4c02-8b30-14bf3314230f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.951679102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97c6e1f1-9d40-4092-8e89-f6f22ee197b2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.951749173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97c6e1f1-9d40-4092-8e89-f6f22ee197b2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.952046543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97c6e1f1-9d40-4092-8e89-f6f22ee197b2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.959527509Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=156a4c53-f4bb-4a38-a9fc-b227ed1ccf0c name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.959704148Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lvxbz,Uid:c4c57d06-48a2-4e2e-a5b3-43137c113173,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1716214922756987234,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T14:21:28.815806433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&PodSandboxMetadata{Name:kube-proxy-sdp6h,Uid:b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1716214922647738693,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T14:21:28.638514865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-462644,Uid:037ddd3e650f1034e14aad84269dad2a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1716214922587522967,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,tier: control-plane,},Annotations:map[string
]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.77:8443,kubernetes.io/config.hash: 037ddd3e650f1034e14aad84269dad2a,kubernetes.io/config.seen: 2024-05-20T14:21:15.111562858Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&PodSandboxMetadata{Name:etcd-pause-462644,Uid:4774dba2e2ddf7ca5c0cb91bdc857d36,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1716214922562406735,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.77:2379,kubernetes.io/config.hash: 4774dba2e2ddf7ca5c0cb91bdc857d36,kubernetes.io/config.seen: 2024-05-20T14:21:15.111558739Z,kubernetes.io/config.source: file,},RuntimeHan
dler:,},&PodSandbox{Id:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-462644,Uid:37daddb58d1a1c4ab2357105ef62e629,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1716214922530263377,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 37daddb58d1a1c4ab2357105ef62e629,kubernetes.io/config.seen: 2024-05-20T14:21:15.111564857Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-462644,Uid:04da59a6741a964d7103cc185a3e5291,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1716214922522494402,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 04da59a6741a964d7103cc185a3e5291,kubernetes.io/config.seen: 2024-05-20T14:21:15.111564055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=156a4c53-f4bb-4a38-a9fc-b227ed1ccf0c name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.960253539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d388c9f6-9cde-4685-ad12-601eb54154bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.960337780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d388c9f6-9cde-4685-ad12-601eb54154bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:26 pause-462644 crio[2955]: time="2024-05-20 14:22:26.960471083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d388c9f6-9cde-4685-ad12-601eb54154bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.000695724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80da5463-a9d4-485b-a80c-411040973f9c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.000785344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80da5463-a9d4-485b-a80c-411040973f9c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.001659941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88925267-12a3-45d5-8d2a-3177aad04e49 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.002103524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214947002082050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88925267-12a3-45d5-8d2a-3177aad04e49 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.002574185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fae01a5-befc-4845-9034-8ef4da65c270 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.002638548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fae01a5-befc-4845-9034-8ef4da65c270 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.002882134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fae01a5-befc-4845-9034-8ef4da65c270 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.054981165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b1d8b68-53eb-4173-b2b0-b07c1a7ce019 name=/runtime.v1.RuntimeService/Version
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.055361931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b1d8b68-53eb-4173-b2b0-b07c1a7ce019 name=/runtime.v1.RuntimeService/Version
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.057218138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd877a08-4770-43f5-90d5-44389f0ff1e1 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.057788012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214947057759509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd877a08-4770-43f5-90d5-44389f0ff1e1 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.058481158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=883ca9ee-e405-47a5-bcc4-24c08a2b69f3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.058578131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=883ca9ee-e405-47a5-bcc4-24c08a2b69f3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:27 pause-462644 crio[2955]: time="2024-05-20 14:22:27.059035183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=883ca9ee-e405-47a5-bcc4-24c08a2b69f3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	edf8b2bea29e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago       Running             coredns                   2                   bae92b5f79db2       coredns-7db6d8ff4d-lvxbz
	a82d3960ce4a3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   17 seconds ago       Running             kube-proxy                2                   7a2a923454bf4       kube-proxy-sdp6h
	2b43a1de818ae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   21 seconds ago       Running             kube-apiserver            2                   e4e8ce98ebd6c       kube-apiserver-pause-462644
	537eebcf1f781       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   21 seconds ago       Running             kube-scheduler            2                   aad62cf661774       kube-scheduler-pause-462644
	c170dd178efb8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      2                   1089b4ab133d4       etcd-pause-462644
	0e1e420e7e507       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   23 seconds ago       Running             kube-controller-manager   1                   c2c3db5757ef1       kube-controller-manager-pause-462644
	c3d5098f42101       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   26 seconds ago       Exited              kube-proxy                1                   41361e71b2d39       kube-proxy-sdp6h
	3f7dd00fffc66       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   26 seconds ago       Exited              kube-apiserver            1                   c303ab62fdb17       kube-apiserver-pause-462644
	4a9b29b2f9ad2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago       Exited              coredns                   1                   dea2acba93521       coredns-7db6d8ff4d-lvxbz
	3083106faea64       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago       Exited              etcd                      1                   f6b25b034d0aa       etcd-pause-462644
	abe6dbb008f6f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   26 seconds ago       Exited              kube-scheduler            1                   d25cdb4534875       kube-scheduler-pause-462644
	5095f4a6930eb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   0                   24b43c9edaef7       kube-controller-manager-pause-462644
	
	
	==> coredns [4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae] <==
	
	
	==> coredns [edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54861 - 20363 "HINFO IN 4134098353068611736.7386086843332236561. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012697784s
	
	
	==> describe nodes <==
	Name:               pause-462644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-462644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=pause-462644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T14_21_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:21:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-462644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:22:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.77
	  Hostname:    pause-462644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 205e3578e56e4223ac654bc599f0d9bc
	  System UUID:                205e3578-e56e-4223-ac65-4bc599f0d9bc
	  Boot ID:                    27f3c900-c699-4db8-a378-e794d94875d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lvxbz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-pause-462644                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         72s
	  kube-system                 kube-apiserver-pause-462644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-462644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-sdp6h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-pause-462644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     72s                kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeReady                71s                kubelet          Node pause-462644 status is now: NodeReady
	  Normal  RegisteredNode           60s                node-controller  Node pause-462644 event: Registered Node pause-462644 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-462644 event: Registered Node pause-462644 in Controller
	
	
	==> dmesg <==
	[  +0.070540] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061774] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.196407] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132521] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[May20 14:21] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.264072] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067329] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.059564] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.969379] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.082606] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.088441] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.817736] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +0.155552] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.206014] kauditd_printk_skb: 88 callbacks suppressed
	[ +20.235104] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.500829] systemd-fstab-generator[2444]: Ignoring "noauto" option for root device
	[  +0.214241] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[May20 14:22] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.187174] systemd-fstab-generator[2717]: Ignoring "noauto" option for root device
	[  +0.322587] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +1.305728] systemd-fstab-generator[3131]: Ignoring "noauto" option for root device
	[  +2.445494] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +0.105237] kauditd_printk_skb: 243 callbacks suppressed
	[ +16.346619] kauditd_printk_skb: 45 callbacks suppressed
	[  +1.875117] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	
	
	==> etcd [3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9] <==
	
	
	==> etcd [c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8] <==
	{"level":"info","ts":"2024-05-20T14:22:05.808998Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:05.809037Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:05.813285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 switched to configuration voters=(15014361478665048849)"}
	{"level":"info","ts":"2024-05-20T14:22:05.813517Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c5fd294bb66e7a29","local-member-id":"d05dba3721239311","added-peer-id":"d05dba3721239311","added-peer-peer-urls":["https://192.168.50.77:2380"]}
	{"level":"info","ts":"2024-05-20T14:22:05.813753Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c5fd294bb66e7a29","local-member-id":"d05dba3721239311","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:05.81385Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:05.824273Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:22:05.824533Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d05dba3721239311","initial-advertise-peer-urls":["https://192.168.50.77:2380"],"listen-peer-urls":["https://192.168.50.77:2380"],"advertise-client-urls":["https://192.168.50.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:22:05.824579Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:22:05.824734Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.77:2380"}
	{"level":"info","ts":"2024-05-20T14:22:05.824755Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.77:2380"}
	{"level":"info","ts":"2024-05-20T14:22:06.86815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 received MsgPreVoteResp from d05dba3721239311 at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.86839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 received MsgVoteResp from d05dba3721239311 at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.868417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.868443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d05dba3721239311 elected leader d05dba3721239311 at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.876281Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d05dba3721239311","local-member-attributes":"{Name:pause-462644 ClientURLs:[https://192.168.50.77:2379]}","request-path":"/0/members/d05dba3721239311/attributes","cluster-id":"c5fd294bb66e7a29","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:22:06.876393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:22:06.879122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.77:2379"}
	{"level":"info","ts":"2024-05-20T14:22:06.8793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:22:06.880033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:22:06.880069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T14:22:06.882885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:22:27 up 1 min,  0 users,  load average: 1.80, 0.67, 0.24
	Linux pause-462644 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259] <==
	I0520 14:22:08.474535       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 14:22:08.475424       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:22:08.475722       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 14:22:08.476576       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 14:22:08.476677       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 14:22:08.478481       1 aggregator.go:165] initial CRD sync complete...
	I0520 14:22:08.478555       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 14:22:08.478579       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 14:22:08.478610       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:22:08.478794       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:22:08.478832       1 policy_source.go:224] refreshing policies
	I0520 14:22:08.485378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:22:08.485786       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 14:22:08.497303       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 14:22:08.497454       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0520 14:22:08.498904       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 14:22:08.517087       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 14:22:09.390081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:22:10.195278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 14:22:10.208255       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 14:22:10.256626       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 14:22:10.299204       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:22:10.308692       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 14:22:21.107150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 14:22:21.129898       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4] <==
	
	
	==> kube-controller-manager [0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5] <==
	I0520 14:22:21.101704       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0520 14:22:21.106251       1 shared_informer.go:320] Caches are synced for job
	I0520 14:22:21.106306       1 shared_informer.go:320] Caches are synced for TTL
	I0520 14:22:21.106270       1 shared_informer.go:320] Caches are synced for cronjob
	I0520 14:22:21.106294       1 shared_informer.go:320] Caches are synced for GC
	I0520 14:22:21.108821       1 shared_informer.go:320] Caches are synced for taint
	I0520 14:22:21.108951       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0520 14:22:21.109050       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-462644"
	I0520 14:22:21.109126       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 14:22:21.111558       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0520 14:22:21.115312       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 14:22:21.117685       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 14:22:21.119607       1 shared_informer.go:320] Caches are synced for deployment
	I0520 14:22:21.123110       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0520 14:22:21.127049       1 shared_informer.go:320] Caches are synced for crt configmap
	I0520 14:22:21.129612       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 14:22:21.225070       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0520 14:22:21.237179       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:22:21.244500       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:22:21.271265       1 shared_informer.go:320] Caches are synced for disruption
	I0520 14:22:21.279539       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 14:22:21.292544       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0520 14:22:21.762215       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:22:21.785826       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:22:21.785860       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153] <==
	I0520 14:21:28.010087       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:21:28.028034       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:21:28.034848       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 14:21:28.082321       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0520 14:21:28.091473       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 14:21:28.564360       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:21:28.564456       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 14:21:28.588624       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:21:28.840374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="338.051894ms"
	I0520 14:21:28.866608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.183209ms"
	I0520 14:21:28.866788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.519µs"
	I0520 14:21:28.878893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.262µs"
	I0520 14:21:28.891512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.866µs"
	I0520 14:21:28.917192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.664µs"
	I0520 14:21:29.622488       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.248663ms"
	I0520 14:21:29.651428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.746705ms"
	I0520 14:21:29.651568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.177µs"
	I0520 14:21:30.339540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.28µs"
	I0520 14:21:30.378715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.257µs"
	I0520 14:21:39.151854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.038792ms"
	I0520 14:21:39.153268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="126.719µs"
	I0520 14:21:40.278126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.946µs"
	I0520 14:21:40.325732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.907µs"
	I0520 14:21:40.626262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.286µs"
	I0520 14:21:40.631379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.108µs"
	
	
	==> kube-proxy [a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4] <==
	I0520 14:22:09.433490       1 server_linux.go:69] "Using iptables proxy"
	I0520 14:22:09.450788       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.77"]
	I0520 14:22:09.513079       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 14:22:09.513129       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 14:22:09.513160       1 server_linux.go:165] "Using iptables Proxier"
	I0520 14:22:09.517129       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 14:22:09.517362       1 server.go:872] "Version info" version="v1.30.1"
	I0520 14:22:09.517498       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:22:09.518946       1 config.go:192] "Starting service config controller"
	I0520 14:22:09.519016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 14:22:09.519097       1 config.go:101] "Starting endpoint slice config controller"
	I0520 14:22:09.519122       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 14:22:09.519570       1 config.go:319] "Starting node config controller"
	I0520 14:22:09.522101       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 14:22:09.621002       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 14:22:09.621113       1 shared_informer.go:320] Caches are synced for service config
	I0520 14:22:09.622561       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45] <==
	
	
	==> kube-scheduler [537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d] <==
	I0520 14:22:06.607705       1 serving.go:380] Generated self-signed cert in-memory
	W0520 14:22:08.450852       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 14:22:08.450960       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:22:08.450972       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 14:22:08.450982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 14:22:08.467270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 14:22:08.467315       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:22:08.469735       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 14:22:08.470015       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:22:08.470104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 14:22:08.470063       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 14:22:08.571107       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1] <==
	
	
	==> kubelet <==
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.363237    3593 scope.go:117] "RemoveContainer" containerID="3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.365282    3593 scope.go:117] "RemoveContainer" containerID="5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.367247    3593 scope.go:117] "RemoveContainer" containerID="abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1"
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.532014    3593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-462644?timeout=10s\": dial tcp 192.168.50.77:8443: connect: connection refused" interval="800ms"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.636715    3593 kubelet_node_status.go:73] "Attempting to register node" node="pause-462644"
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.637742    3593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.77:8443: connect: connection refused" node="pause-462644"
	May 20 14:22:05 pause-462644 kubelet[3593]: W0520 14:22:05.725975    3593 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.726069    3593 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: W0520 14:22:05.727877    3593 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-462644&limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.728004    3593 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-462644&limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:06 pause-462644 kubelet[3593]: I0520 14:22:06.439653    3593 kubelet_node_status.go:73] "Attempting to register node" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.536154    3593 kubelet_node_status.go:112] "Node was previously registered" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.536708    3593 kubelet_node_status.go:76] "Successfully registered node" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.538317    3593 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.540116    3593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 14:22:08 pause-462644 kubelet[3593]: E0520 14:22:08.557510    3593 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-462644\" not found"
	May 20 14:22:08 pause-462644 kubelet[3593]: E0520 14:22:08.658221    3593 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-462644\" not found"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.918100    3593 apiserver.go:52] "Watching apiserver"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.921456    3593 topology_manager.go:215] "Topology Admit Handler" podUID="b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e" podNamespace="kube-system" podName="kube-proxy-sdp6h"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.921897    3593 topology_manager.go:215] "Topology Admit Handler" podUID="c4c57d06-48a2-4e2e-a5b3-43137c113173" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lvxbz"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.950447    3593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e-xtables-lock\") pod \"kube-proxy-sdp6h\" (UID: \"b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e\") " pod="kube-system/kube-proxy-sdp6h"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.951787    3593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e-lib-modules\") pod \"kube-proxy-sdp6h\" (UID: \"b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e\") " pod="kube-system/kube-proxy-sdp6h"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.024254    3593 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.223168    3593 scope.go:117] "RemoveContainer" containerID="c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.223885    3593 scope.go:117] "RemoveContainer" containerID="4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-462644 -n pause-462644
helpers_test.go:261: (dbg) Run:  kubectl --context pause-462644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-462644 -n pause-462644
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-462644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-462644 logs -n 25: (1.455047483s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat              | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo cat              | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo                  | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo find             | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-862860 sudo crio             | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-862860                       | cilium-862860             | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	| start   | -p pause-462644 --memory=2048          | pause-462644              | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:21 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-565318 ssh                | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-565318 -- sudo         | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-565318                 | cert-options-565318       | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:20 UTC |
	| start   | -p auto-862860 --memory=3072           | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:20 UTC | 20 May 24 14:22 UTC |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-366203           | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:21 UTC |
	| start   | -p kubernetes-upgrade-366203           | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:22 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-462644                        | pause-462644              | jenkins | v1.33.1 | 20 May 24 14:21 UTC | 20 May 24 14:22 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p auto-862860 pgrep -a                | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:22 UTC | 20 May 24 14:22 UTC |
	|         | kubelet                                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-366203           | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:22 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-366203           | kubernetes-upgrade-366203 | jenkins | v1.33.1 | 20 May 24 14:22 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p auto-862860 sudo cat                | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:22 UTC | 20 May 24 14:22 UTC |
	|         | /etc/nsswitch.conf                     |                           |         |         |                     |                     |
	| ssh     | -p auto-862860 sudo cat                | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:22 UTC | 20 May 24 14:22 UTC |
	|         | /etc/hosts                             |                           |         |         |                     |                     |
	| ssh     | -p auto-862860 sudo cat                | auto-862860               | jenkins | v1.33.1 | 20 May 24 14:22 UTC |                     |
	|         | /etc/resolv.conf                       |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 14:22:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 14:22:13.941220  656347 out.go:291] Setting OutFile to fd 1 ...
	I0520 14:22:13.941503  656347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:22:13.941514  656347 out.go:304] Setting ErrFile to fd 2...
	I0520 14:22:13.941521  656347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 14:22:13.941745  656347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 14:22:13.942318  656347 out.go:298] Setting JSON to false
	I0520 14:22:13.943307  656347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14674,"bootTime":1716200260,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 14:22:13.943376  656347 start.go:139] virtualization: kvm guest
	I0520 14:22:13.946431  656347 out.go:177] * [kubernetes-upgrade-366203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 14:22:13.948760  656347 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 14:22:13.948766  656347 notify.go:220] Checking for updates...
	I0520 14:22:13.951134  656347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 14:22:13.953236  656347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:22:13.955335  656347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 14:22:13.957612  656347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 14:22:13.959644  656347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 14:22:13.962159  656347 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:13.962673  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:13.962733  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:13.979901  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0520 14:22:13.980381  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:13.981062  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:13.981091  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:13.981542  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:13.981794  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:13.982100  656347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 14:22:13.982403  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:13.982448  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:13.999756  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0520 14:22:14.000275  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:14.000835  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:14.000859  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:14.001233  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:14.001452  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.039896  656347 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 14:22:14.042241  656347 start.go:297] selected driver: kvm2
	I0520 14:22:14.042269  656347 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:22:14.042373  656347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 14:22:14.043140  656347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:22:14.043232  656347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 14:22:14.061077  656347 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 14:22:14.061647  656347 cni.go:84] Creating CNI manager for ""
	I0520 14:22:14.061674  656347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 14:22:14.061731  656347 start.go:340] cluster config:
	{Name:kubernetes-upgrade-366203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-366203 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 14:22:14.061881  656347 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 14:22:14.064914  656347 out.go:177] * Starting "kubernetes-upgrade-366203" primary control-plane node in "kubernetes-upgrade-366203" cluster
	I0520 14:22:14.067877  656347 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 14:22:14.067958  656347 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 14:22:14.067972  656347 cache.go:56] Caching tarball of preloaded images
	I0520 14:22:14.068054  656347 preload.go:173] Found /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 14:22:14.068068  656347 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 14:22:14.068185  656347 profile.go:143] Saving config to /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kubernetes-upgrade-366203/config.json ...
	I0520 14:22:14.068431  656347 start.go:360] acquireMachinesLock for kubernetes-upgrade-366203: {Name:mk7e0027944e960a89650b0ddf8b5422d490b021 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 14:22:14.068487  656347 start.go:364] duration metric: took 32.069µs to acquireMachinesLock for "kubernetes-upgrade-366203"
	I0520 14:22:14.068517  656347 start.go:96] Skipping create...Using existing machine configuration
	I0520 14:22:14.068529  656347 fix.go:54] fixHost starting: 
	I0520 14:22:14.068894  656347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 14:22:14.068922  656347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 14:22:14.086332  656347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0520 14:22:14.086900  656347 main.go:141] libmachine: () Calling .GetVersion
	I0520 14:22:14.087562  656347 main.go:141] libmachine: Using API Version  1
	I0520 14:22:14.087598  656347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 14:22:14.087982  656347 main.go:141] libmachine: () Calling .GetMachineName
	I0520 14:22:14.088210  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.088451  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetState
	I0520 14:22:14.090170  656347 fix.go:112] recreateIfNeeded on kubernetes-upgrade-366203: state=Running err=<nil>
	W0520 14:22:14.090195  656347 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 14:22:14.092966  656347 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-366203" VM ...
	I0520 14:22:12.907015  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:15.402531  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:14.095156  656347 machine.go:94] provisionDockerMachine start ...
	I0520 14:22:14.095190  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:14.095458  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.098489  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.098986  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.099031  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.099322  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.099565  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.099725  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.103394  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.105355  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.105578  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.105586  656347 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 14:22:14.234378  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-366203
	
	I0520 14:22:14.234422  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.234735  656347 buildroot.go:166] provisioning hostname "kubernetes-upgrade-366203"
	I0520 14:22:14.234781  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.234981  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.238149  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.238665  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.238712  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.238831  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.239057  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.239232  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.239376  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.239567  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.239738  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.239751  656347 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-366203 && echo "kubernetes-upgrade-366203" | sudo tee /etc/hostname
	I0520 14:22:14.362879  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-366203
	
	I0520 14:22:14.362921  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.366183  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.366527  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.366558  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.366783  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.367006  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.367255  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.367432  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.367687  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.367861  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.367880  656347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-366203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-366203/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-366203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 14:22:14.478472  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
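The shell fragment above is what the provisioner runs over SSH to keep /etc/hosts consistent with the new machine hostname: it maps 127.0.1.1 to the hostname only if no matching entry exists. A minimal Go sketch of how such a command string can be assembled (illustration only, not minikube's actual provisioner code; a runner such as the hypothetical runSSH mentioned in the comment would execute it):

    package main

    import "fmt"

    // hostsFixupCmd builds the same idempotent /etc/hosts update seen in the
    // log: map 127.0.1.1 to the machine hostname unless an entry for it is
    // already present. Illustrative sketch only.
    func hostsFixupCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixupCmd("kubernetes-upgrade-366203"))
    	// A real provisioner would hand this string to an SSH runner,
    	// e.g. runSSH(hostsFixupCmd(name)); runSSH is hypothetical here.
    }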
	I0520 14:22:14.478520  656347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18929-602525/.minikube CaCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18929-602525/.minikube}
	I0520 14:22:14.478570  656347 buildroot.go:174] setting up certificates
	I0520 14:22:14.478581  656347 provision.go:84] configureAuth start
	I0520 14:22:14.478595  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetMachineName
	I0520 14:22:14.478867  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:22:14.481934  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.482423  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.482459  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.482707  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.485173  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.485560  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.485592  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.485955  656347 provision.go:143] copyHostCerts
	I0520 14:22:14.486030  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem, removing ...
	I0520 14:22:14.486044  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem
	I0520 14:22:14.486117  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/ca.pem (1082 bytes)
	I0520 14:22:14.486237  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem, removing ...
	I0520 14:22:14.486247  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem
	I0520 14:22:14.486278  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/cert.pem (1123 bytes)
	I0520 14:22:14.486351  656347 exec_runner.go:144] found /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem, removing ...
	I0520 14:22:14.486361  656347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem
	I0520 14:22:14.486389  656347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18929-602525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18929-602525/.minikube/key.pem (1679 bytes)
	I0520 14:22:14.486459  656347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-366203 san=[127.0.0.1 192.168.39.196 kubernetes-upgrade-366203 localhost minikube]
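The provisioning step above generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube, and the following steps copy the CA and server cert/key into /etc/docker on the guest. The sketch below builds a certificate with those SANs but self-signs it to stay short; the real flow signs with the minikube CA key, so treat this strictly as an illustration of the SAN layout:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCertPEM creates a self-signed certificate carrying the same SANs
    // as the log line above. Self-signing keeps the sketch compact; the
    // provisioner actually signs with the minikube CA.
    func serverCertPEM(ips []net.IP, dnsNames []string) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-366203"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dnsNames,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	pemBytes, err := serverCertPEM(
    		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
    		[]string{"kubernetes-upgrade-366203", "localhost", "minikube"},
    	)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pemBytes))
    }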
	I0520 14:22:14.730405  656347 provision.go:177] copyRemoteCerts
	I0520 14:22:14.730484  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 14:22:14.730516  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.733650  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.734119  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.734152  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.734441  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.734733  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.734920  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.735079  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:14.816564  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 14:22:14.845370  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 14:22:14.879388  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 14:22:14.910491  656347 provision.go:87] duration metric: took 431.891438ms to configureAuth
	I0520 14:22:14.910531  656347 buildroot.go:189] setting minikube options for container-runtime
	I0520 14:22:14.910715  656347 config.go:182] Loaded profile config "kubernetes-upgrade-366203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:14.910794  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:14.913774  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.914159  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:14.914194  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:14.914329  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:14.914574  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.914786  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:14.914935  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:14.915092  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:14.915305  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:14.915326  656347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 14:22:15.840762  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 14:22:15.840800  656347 machine.go:97] duration metric: took 1.745626869s to provisionDockerMachine
	I0520 14:22:15.840815  656347 start.go:293] postStartSetup for "kubernetes-upgrade-366203" (driver="kvm2")
	I0520 14:22:15.840829  656347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 14:22:15.840854  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:15.841232  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 14:22:15.841285  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:15.843888  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.844260  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:15.844291  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.844504  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:15.844724  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.844921  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:15.845112  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:15.928299  656347 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 14:22:15.932824  656347 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 14:22:15.932856  656347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/addons for local assets ...
	I0520 14:22:15.932938  656347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18929-602525/.minikube/files for local assets ...
	I0520 14:22:15.933022  656347 filesync.go:149] local asset: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem -> 6098672.pem in /etc/ssl/certs
	I0520 14:22:15.933113  656347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 14:22:15.942657  656347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/ssl/certs/6098672.pem --> /etc/ssl/certs/6098672.pem (1708 bytes)
	I0520 14:22:15.966705  656347 start.go:296] duration metric: took 125.870507ms for postStartSetup
	I0520 14:22:15.966764  656347 fix.go:56] duration metric: took 1.898235402s for fixHost
	I0520 14:22:15.966789  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:15.969884  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.970274  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:15.970321  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:15.970496  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:15.970709  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.970861  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:15.970953  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:15.971117  656347 main.go:141] libmachine: Using SSH client type: native
	I0520 14:22:15.971289  656347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0520 14:22:15.971301  656347 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 14:22:16.073754  656347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716214936.067341918
	
	I0520 14:22:16.073781  656347 fix.go:216] guest clock: 1716214936.067341918
	I0520 14:22:16.073790  656347 fix.go:229] Guest: 2024-05-20 14:22:16.067341918 +0000 UTC Remote: 2024-05-20 14:22:15.966769641 +0000 UTC m=+2.065791293 (delta=100.572277ms)
	I0520 14:22:16.073819  656347 fix.go:200] guest clock delta is within tolerance: 100.572277ms
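The fix step above reads the guest clock via `date +%s.%N` over SSH and compares it with the host's wall clock; a time resync is only forced when the delta exceeds a tolerance. A minimal sketch of that comparison, where the 5s tolerance is an assumption for illustration and not necessarily the value minikube uses:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the
    // host clock that no resync is needed. guestUnix is the value parsed
    // from `date +%s.%N` run inside the VM.
    func clockDeltaOK(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log entry above (guest vs. remote timestamps).
    	delta, ok := clockDeltaOK(1716214936.067341918, time.Unix(1716214935, 966769641), 5*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // roughly 100ms, true
    }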
	I0520 14:22:16.073825  656347 start.go:83] releasing machines lock for "kubernetes-upgrade-366203", held for 2.005324672s
	I0520 14:22:16.073852  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.074205  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetIP
	I0520 14:22:16.077047  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.077383  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.077415  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.077547  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078070  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078266  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .DriverName
	I0520 14:22:16.078380  656347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 14:22:16.078462  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:16.078508  656347 ssh_runner.go:195] Run: cat /version.json
	I0520 14:22:16.078534  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHHostname
	I0520 14:22:16.081416  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.081644  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.081916  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.081971  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.082005  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:db:54", ip: ""} in network mk-kubernetes-upgrade-366203: {Iface:virbr2 ExpiryTime:2024-05-20 15:21:42 +0000 UTC Type:0 Mac:52:54:00:83:db:54 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-366203 Clientid:01:52:54:00:83:db:54}
	I0520 14:22:16.082023  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) DBG | domain kubernetes-upgrade-366203 has defined IP address 192.168.39.196 and MAC address 52:54:00:83:db:54 in network mk-kubernetes-upgrade-366203
	I0520 14:22:16.082081  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:16.082298  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHPort
	I0520 14:22:16.082310  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:16.082507  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:16.082527  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHKeyPath
	I0520 14:22:16.082692  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	I0520 14:22:16.082769  656347 main.go:141] libmachine: (kubernetes-upgrade-366203) Calling .GetSSHUsername
	I0520 14:22:16.082917  656347 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/kubernetes-upgrade-366203/id_rsa Username:docker}
	W0520 14:22:16.187881  656347 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 14:22:16.187979  656347 ssh_runner.go:195] Run: systemctl --version
	I0520 14:22:16.195466  656347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 14:22:16.384723  656347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 14:22:16.412438  656347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 14:22:16.412531  656347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 14:22:16.440676  656347 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 14:22:16.440703  656347 start.go:494] detecting cgroup driver to use...
	I0520 14:22:16.440784  656347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 14:22:16.490220  656347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 14:22:16.521876  656347 docker.go:217] disabling cri-docker service (if available) ...
	I0520 14:22:16.521954  656347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 14:22:16.548998  656347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 14:22:16.601518  656347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 14:22:16.804104  656347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 14:22:16.983717  656347 docker.go:233] disabling docker service ...
	I0520 14:22:16.983794  656347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 14:22:17.004985  656347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 14:22:17.024966  656347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 14:22:17.233061  656347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 14:22:17.426445  656347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 14:22:17.441817  656347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 14:22:17.461971  656347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 14:22:17.462050  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.476664  656347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 14:22:17.476727  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.491558  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.506498  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.517864  656347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 14:22:17.530588  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.541486  656347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.554534  656347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 14:22:17.565509  656347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 14:22:17.576458  656347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 14:22:17.585883  656347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:17.764694  656347 ssh_runner.go:195] Run: sudo systemctl restart crio
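The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then reloads systemd and restarts CRI-O. A condensed sketch of the same idea, returning the command strings that a runner analogous to minikube's ssh_runner would execute (the runner itself is left hypothetical):

    package main

    import "fmt"

    // crioConfigCmds lists the in-place edits applied to CRI-O's drop-in
    // config before restarting the service, mirroring the log above. Only a
    // subset of the edits is shown.
    func crioConfigCmds(pauseImage, cgroupMgr string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart crio`,
    	}
    }

    func main() {
    	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
    		fmt.Println(c) // a real runner would execute each command over SSH
    	}
    }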
	I0520 14:22:17.403044  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:19.902145  655980 pod_ready.go:102] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"False"
	I0520 14:22:21.400526  655980 pod_ready.go:92] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:21.400549  655980 pod_ready.go:81] duration metric: took 10.504918414s for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:21.400559  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.907501  655980 pod_ready.go:92] pod "kube-apiserver-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.907522  655980 pod_ready.go:81] duration metric: took 1.50695678s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.907532  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.912294  655980 pod_ready.go:92] pod "kube-controller-manager-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.912313  655980 pod_ready.go:81] duration metric: took 4.774877ms for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.912322  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.918617  655980 pod_ready.go:92] pod "kube-proxy-sdp6h" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.918643  655980 pod_ready.go:81] duration metric: took 6.315353ms for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.918651  655980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.922505  655980 pod_ready.go:92] pod "kube-scheduler-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:22.922526  655980 pod_ready.go:81] duration metric: took 3.868104ms for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:22.922533  655980 pod_ready.go:38] duration metric: took 12.546229083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
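The pod_ready.go entries above poll each system-critical pod until its PodReady condition reports True or the per-pod timeout expires. A minimal client-go sketch of that readiness poll, assuming a kubeconfig at a placeholder path and a 2s poll interval (both assumptions for this sketch):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True or the
    // timeout elapses. Illustrative only; not minikube's implementation.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is a placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-pause-462644", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }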
	I0520 14:22:22.922552  655980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 14:22:22.935943  655980 ops.go:34] apiserver oom_adj: -16
	I0520 14:22:22.935964  655980 kubeadm.go:591] duration metric: took 19.585395822s to restartPrimaryControlPlane
	I0520 14:22:22.935974  655980 kubeadm.go:393] duration metric: took 19.685229261s to StartCluster
	I0520 14:22:22.935993  655980 settings.go:142] acquiring lock: {Name:mkfd2b5ade573ba8a1562334a87bc71c9f86de47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:22:22.936081  655980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 14:22:22.939711  655980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18929-602525/kubeconfig: {Name:mkf2ddc28a03db07b0a9823212c8450f3ae4cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 14:22:22.940241  655980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 14:22:22.943079  655980 out.go:177] * Verifying Kubernetes components...
	I0520 14:22:22.940407  655980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 14:22:22.940612  655980 config.go:182] Loaded profile config "pause-462644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 14:22:22.945433  655980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 14:22:22.947670  655980 out.go:177] * Enabled addons: 
	I0520 14:22:22.950076  655980 addons.go:505] duration metric: took 9.687834ms for enable addons: enabled=[]
	I0520 14:22:23.100411  655980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 14:22:23.115835  655980 node_ready.go:35] waiting up to 6m0s for node "pause-462644" to be "Ready" ...
	I0520 14:22:23.118647  655980 node_ready.go:49] node "pause-462644" has status "Ready":"True"
	I0520 14:22:23.118667  655980 node_ready.go:38] duration metric: took 2.790329ms for node "pause-462644" to be "Ready" ...
	I0520 14:22:23.118675  655980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:22:23.123170  655980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.399049  655980 pod_ready.go:92] pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:23.399085  655980 pod_ready.go:81] duration metric: took 275.894569ms for pod "coredns-7db6d8ff4d-lvxbz" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.399096  655980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.800279  655980 pod_ready.go:92] pod "etcd-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:23.800310  655980 pod_ready.go:81] duration metric: took 401.207937ms for pod "etcd-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:23.800322  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.199136  655980 pod_ready.go:92] pod "kube-apiserver-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.199176  655980 pod_ready.go:81] duration metric: took 398.833905ms for pod "kube-apiserver-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.199191  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.600201  655980 pod_ready.go:92] pod "kube-controller-manager-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.600229  655980 pod_ready.go:81] duration metric: took 401.028998ms for pod "kube-controller-manager-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.600245  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.999720  655980 pod_ready.go:92] pod "kube-proxy-sdp6h" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:24.999745  655980 pod_ready.go:81] duration metric: took 399.492964ms for pod "kube-proxy-sdp6h" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:24.999755  655980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:25.399247  655980 pod_ready.go:92] pod "kube-scheduler-pause-462644" in "kube-system" namespace has status "Ready":"True"
	I0520 14:22:25.399279  655980 pod_ready.go:81] duration metric: took 399.516266ms for pod "kube-scheduler-pause-462644" in "kube-system" namespace to be "Ready" ...
	I0520 14:22:25.399290  655980 pod_ready.go:38] duration metric: took 2.280604318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 14:22:25.399310  655980 api_server.go:52] waiting for apiserver process to appear ...
	I0520 14:22:25.399376  655980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 14:22:25.412638  655980 api_server.go:72] duration metric: took 2.472352557s to wait for apiserver process to appear ...
	I0520 14:22:25.412662  655980 api_server.go:88] waiting for apiserver healthz status ...
	I0520 14:22:25.412682  655980 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0520 14:22:25.416800  655980 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0520 14:22:25.417912  655980 api_server.go:141] control plane version: v1.30.1
	I0520 14:22:25.417961  655980 api_server.go:131] duration metric: took 5.291014ms to wait for apiserver health ...
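The health wait above is essentially an HTTPS GET against /healthz that succeeds once the endpoint returns 200 with body "ok". A minimal sketch of that probe; note that the real check trusts the cluster CA and presents client certificates, whereas this sketch skips TLS verification purely to stay short (an assumption, not how minikube does it):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy performs the same kind of probe as the healthz wait in
    // the log: GET https://<ip>:8443/healthz and require a 200 "ok" response.
    func apiserverHealthy(addr string) (bool, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://" + addr + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := apiserverHealthy("192.168.50.77:8443")
    	fmt.Println(ok, err)
    }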
	I0520 14:22:25.417972  655980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 14:22:25.601856  655980 system_pods.go:59] 6 kube-system pods found
	I0520 14:22:25.601888  655980 system_pods.go:61] "coredns-7db6d8ff4d-lvxbz" [c4c57d06-48a2-4e2e-a5b3-43137c113173] Running
	I0520 14:22:25.601893  655980 system_pods.go:61] "etcd-pause-462644" [4ad49e79-7b2a-4f37-b97d-ef871b9d0b16] Running
	I0520 14:22:25.601896  655980 system_pods.go:61] "kube-apiserver-pause-462644" [b64c449a-053f-4d9a-93de-1603a8ae1cb7] Running
	I0520 14:22:25.601900  655980 system_pods.go:61] "kube-controller-manager-pause-462644" [63ece984-d10c-472b-bea7-c02aa8dcbd17] Running
	I0520 14:22:25.601902  655980 system_pods.go:61] "kube-proxy-sdp6h" [b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e] Running
	I0520 14:22:25.601905  655980 system_pods.go:61] "kube-scheduler-pause-462644" [ac70ec8d-62fe-484d-a7ad-ee7f21d938a6] Running
	I0520 14:22:25.601912  655980 system_pods.go:74] duration metric: took 183.925149ms to wait for pod list to return data ...
	I0520 14:22:25.601919  655980 default_sa.go:34] waiting for default service account to be created ...
	I0520 14:22:25.799895  655980 default_sa.go:45] found service account: "default"
	I0520 14:22:25.799944  655980 default_sa.go:55] duration metric: took 198.016228ms for default service account to be created ...
	I0520 14:22:25.799959  655980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 14:22:26.001478  655980 system_pods.go:86] 6 kube-system pods found
	I0520 14:22:26.001510  655980 system_pods.go:89] "coredns-7db6d8ff4d-lvxbz" [c4c57d06-48a2-4e2e-a5b3-43137c113173] Running
	I0520 14:22:26.001516  655980 system_pods.go:89] "etcd-pause-462644" [4ad49e79-7b2a-4f37-b97d-ef871b9d0b16] Running
	I0520 14:22:26.001521  655980 system_pods.go:89] "kube-apiserver-pause-462644" [b64c449a-053f-4d9a-93de-1603a8ae1cb7] Running
	I0520 14:22:26.001525  655980 system_pods.go:89] "kube-controller-manager-pause-462644" [63ece984-d10c-472b-bea7-c02aa8dcbd17] Running
	I0520 14:22:26.001529  655980 system_pods.go:89] "kube-proxy-sdp6h" [b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e] Running
	I0520 14:22:26.001535  655980 system_pods.go:89] "kube-scheduler-pause-462644" [ac70ec8d-62fe-484d-a7ad-ee7f21d938a6] Running
	I0520 14:22:26.001542  655980 system_pods.go:126] duration metric: took 201.575237ms to wait for k8s-apps to be running ...
	I0520 14:22:26.001549  655980 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 14:22:26.001598  655980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 14:22:26.018317  655980 system_svc.go:56] duration metric: took 16.75747ms WaitForService to wait for kubelet
	I0520 14:22:26.018353  655980 kubeadm.go:576] duration metric: took 3.078068872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 14:22:26.018377  655980 node_conditions.go:102] verifying NodePressure condition ...
	I0520 14:22:26.201060  655980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 14:22:26.201089  655980 node_conditions.go:123] node cpu capacity is 2
	I0520 14:22:26.201106  655980 node_conditions.go:105] duration metric: took 182.720739ms to run NodePressure ...
	I0520 14:22:26.201121  655980 start.go:240] waiting for startup goroutines ...
	I0520 14:22:26.201130  655980 start.go:245] waiting for cluster config update ...
	I0520 14:22:26.201139  655980 start.go:254] writing updated cluster config ...
	I0520 14:22:26.201446  655980 ssh_runner.go:195] Run: rm -f paused
	I0520 14:22:26.255349  655980 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 14:22:26.259714  655980 out.go:177] * Done! kubectl is now configured to use "pause-462644" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.006035572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba3bbb30-7cb7-4771-8f90-912ebbbf160c name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.006394155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba3bbb30-7cb7-4771-8f90-912ebbbf160c name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.031341957Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=3c119fef-bdee-46c3-a8ba-451c87c85595 name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.031463896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c119fef-bdee-46c3-a8ba-451c87c85595 name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.062280225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=205a6100-0935-41d4-a216-b407cded534c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.062388828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=205a6100-0935-41d4-a216-b407cded534c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.064018423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8cf3da7-90b5-4d38-a1ae-90cd9941b95b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.064670059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214949064633957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8cf3da7-90b5-4d38-a1ae-90cd9941b95b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.065603553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3057112d-3eca-4da9-90c6-40a380d52704 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.065698834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3057112d-3eca-4da9-90c6-40a380d52704 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.066265845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3057112d-3eca-4da9-90c6-40a380d52704 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.107178273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b7abf62-dcaf-49b6-b760-4279d92adfeb name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.107251502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b7abf62-dcaf-49b6-b760-4279d92adfeb name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.108372613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4a5454d-26e7-4a12-be77-92ea7b4ea27e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.108726573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214949108697388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4a5454d-26e7-4a12-be77-92ea7b4ea27e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.109228151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e400088-0a29-4249-b2e5-84cdb4a3d47f name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.109278457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e400088-0a29-4249-b2e5-84cdb4a3d47f name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.109674302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e400088-0a29-4249-b2e5-84cdb4a3d47f name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.153780470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cdb2fb3-9118-4038-8656-5d65338b8c4c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.153854525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cdb2fb3-9118-4038-8656-5d65338b8c4c name=/runtime.v1.RuntimeService/Version
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.155084821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e21d8be-c0eb-4ae6-a613-26e972356bb6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.155654501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716214949155617072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e21d8be-c0eb-4ae6-a613-26e972356bb6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.156309753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d50f128f-b65d-40b1-bb31-03bd5b7b2674 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.156366785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d50f128f-b65d-40b1-bb31-03bd5b7b2674 name=/runtime.v1.RuntimeService/ListContainers
	May 20 14:22:29 pause-462644 crio[2955]: time="2024-05-20 14:22:29.156598844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4,PodSandboxId:7a2a923454bf49677bb63b807f135cd97bdcaa79f5f7ff9d30b255b7d93d0e08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716214929240848674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec048,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880,PodSandboxId:bae92b5f79db228a1f840d39d02179a85a116c090afc92bd7a24df6026a3087e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716214929248853644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8,PodSandboxId:1089b4ab133d4e4ec380bc81039e8be66177048fa9ec43363baad52e1a3d2cf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716214925378516952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annot
ations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d,PodSandboxId:aad62cf6617740589e54b09bc58e3f363596b8826fff65c46f77f086189ba3e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716214925393273432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259,PodSandboxId:e4e8ce98ebd6cbba973eb4684407d00f3ae0105ee1178f63ee16de39f60f01b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716214925398525002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernet
es.container.hash: 99044ee8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5,PodSandboxId:c2c3db5757ef17d16015aef562602d086a436f994ff8697cdf1b009767a0f767,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716214923553416221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45,PodSandboxId:41361e71b2d39d4740fe4cb88e3b66ae372aef929f85e191f324cf4f2442e111,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716214921009460377,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdp6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e,},Annotations:map[string]string{io.kubernetes.container.hash: a45ec0
48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae,PodSandboxId:dea2acba935211815453c2ff8bec8f400b9d141e6090c7f772d1ee9f62fe1e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716214920965157432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lvxbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c57d06-48a2-4e2e-a5b3-43137c113173,},Annotations:map[string]string{io.kubernetes.container.hash: 93fa6209,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4,PodSandboxId:c303ab62fdb17789962c5d9bfe8763c29a5d1e427aa86be469ec9e41630e1e82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716214921003540449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037ddd3e650f1034e14aad84269dad2a,},Annotations:map[string]string{io.kubernetes.container.hash: 99044ee8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9,PodSandboxId:f6b25b034d0aa241c014f43296f453432c2c3ecad12861a60e4881e0e0fda018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716214920632493590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-462644,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4774dba2e2ddf7ca5c0cb91bdc857d36,},Annotations:map[string]string{io.kubernetes.container.hash: ade9efe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1,PodSandboxId:d25cdb4534875efa5cb5721c9cc48b59680458008a03f11f39fba2fbcace29a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716214920340571341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 37daddb58d1a1c4ab2357105ef62e629,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153,PodSandboxId:24b43c9edaef7d7ae37453e75fa07387f9c8897e693b5bc990493661d44bd8f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716214869841976878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-462644,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 04da59a6741a964d7103cc185a3e5291,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d50f128f-b65d-40b1-bb31-03bd5b7b2674 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	edf8b2bea29e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   2                   bae92b5f79db2       coredns-7db6d8ff4d-lvxbz
	a82d3960ce4a3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   19 seconds ago       Running             kube-proxy                2                   7a2a923454bf4       kube-proxy-sdp6h
	2b43a1de818ae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   23 seconds ago       Running             kube-apiserver            2                   e4e8ce98ebd6c       kube-apiserver-pause-462644
	537eebcf1f781       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   23 seconds ago       Running             kube-scheduler            2                   aad62cf661774       kube-scheduler-pause-462644
	c170dd178efb8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   1089b4ab133d4       etcd-pause-462644
	0e1e420e7e507       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   25 seconds ago       Running             kube-controller-manager   1                   c2c3db5757ef1       kube-controller-manager-pause-462644
	c3d5098f42101       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   28 seconds ago       Exited              kube-proxy                1                   41361e71b2d39       kube-proxy-sdp6h
	3f7dd00fffc66       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   28 seconds ago       Exited              kube-apiserver            1                   c303ab62fdb17       kube-apiserver-pause-462644
	4a9b29b2f9ad2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago       Exited              coredns                   1                   dea2acba93521       coredns-7db6d8ff4d-lvxbz
	3083106faea64       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago       Exited              etcd                      1                   f6b25b034d0aa       etcd-pause-462644
	abe6dbb008f6f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   28 seconds ago       Exited              kube-scheduler            1                   d25cdb4534875       kube-scheduler-pause-462644
	5095f4a6930eb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   0                   24b43c9edaef7       kube-controller-manager-pause-462644
	
	
	==> coredns [4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae] <==
	
	
	==> coredns [edf8b2bea29e43cfa38c59a51087f395b1bd9b5f2b7d715d19df0e9022406880] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54861 - 20363 "HINFO IN 4134098353068611736.7386086843332236561. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012697784s
	
	
	==> describe nodes <==
	Name:               pause-462644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-462644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=pause-462644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T14_21_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 14:21:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-462644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 14:22:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 14:22:08 +0000   Mon, 20 May 2024 14:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.77
	  Hostname:    pause-462644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 205e3578e56e4223ac654bc599f0d9bc
	  System UUID:                205e3578-e56e-4223-ac65-4bc599f0d9bc
	  Boot ID:                    27f3c900-c699-4db8-a378-e794d94875d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lvxbz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-pause-462644                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         74s
	  kube-system                 kube-apiserver-pause-462644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-462644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-sdp6h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-pause-462644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     74s                kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeReady                73s                kubelet          Node pause-462644 status is now: NodeReady
	  Normal  RegisteredNode           62s                node-controller  Node pause-462644 event: Registered Node pause-462644 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-462644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-462644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-462644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-462644 event: Registered Node pause-462644 in Controller
	
	
	==> dmesg <==
	[  +0.070540] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061774] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.196407] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132521] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[May20 14:21] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.264072] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067329] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.059564] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.969379] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.082606] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.088441] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.817736] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +0.155552] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.206014] kauditd_printk_skb: 88 callbacks suppressed
	[ +20.235104] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.500829] systemd-fstab-generator[2444]: Ignoring "noauto" option for root device
	[  +0.214241] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[May20 14:22] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.187174] systemd-fstab-generator[2717]: Ignoring "noauto" option for root device
	[  +0.322587] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +1.305728] systemd-fstab-generator[3131]: Ignoring "noauto" option for root device
	[  +2.445494] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +0.105237] kauditd_printk_skb: 243 callbacks suppressed
	[ +16.346619] kauditd_printk_skb: 45 callbacks suppressed
	[  +1.875117] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	
	
	==> etcd [3083106faea64ffdba84f12ca33edeab1ad16d4a665d2417c40cb79a050666c9] <==
	
	
	==> etcd [c170dd178efb899d62ae1ac9f7322b5caf5a5a374102e479aa9bcfc77974e3c8] <==
	{"level":"info","ts":"2024-05-20T14:22:05.808998Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:05.809037Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T14:22:05.813285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 switched to configuration voters=(15014361478665048849)"}
	{"level":"info","ts":"2024-05-20T14:22:05.813517Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c5fd294bb66e7a29","local-member-id":"d05dba3721239311","added-peer-id":"d05dba3721239311","added-peer-peer-urls":["https://192.168.50.77:2380"]}
	{"level":"info","ts":"2024-05-20T14:22:05.813753Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c5fd294bb66e7a29","local-member-id":"d05dba3721239311","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:05.81385Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T14:22:05.824273Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T14:22:05.824533Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d05dba3721239311","initial-advertise-peer-urls":["https://192.168.50.77:2380"],"listen-peer-urls":["https://192.168.50.77:2380"],"advertise-client-urls":["https://192.168.50.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T14:22:05.824579Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T14:22:05.824734Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.77:2380"}
	{"level":"info","ts":"2024-05-20T14:22:05.824755Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.77:2380"}
	{"level":"info","ts":"2024-05-20T14:22:06.86815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 received MsgPreVoteResp from d05dba3721239311 at term 2"}
	{"level":"info","ts":"2024-05-20T14:22:06.868366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.86839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 received MsgVoteResp from d05dba3721239311 at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.868417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d05dba3721239311 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.868443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d05dba3721239311 elected leader d05dba3721239311 at term 3"}
	{"level":"info","ts":"2024-05-20T14:22:06.876281Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d05dba3721239311","local-member-attributes":"{Name:pause-462644 ClientURLs:[https://192.168.50.77:2379]}","request-path":"/0/members/d05dba3721239311/attributes","cluster-id":"c5fd294bb66e7a29","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T14:22:06.876393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:22:06.879122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.77:2379"}
	{"level":"info","ts":"2024-05-20T14:22:06.8793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T14:22:06.880033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T14:22:06.880069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T14:22:06.882885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:22:29 up 1 min,  0 users,  load average: 1.80, 0.67, 0.24
	Linux pause-462644 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b43a1de818ae00812f825cceb04835cfe92f7a20552625bae0cfb20051f6259] <==
	I0520 14:22:08.474535       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 14:22:08.475424       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 14:22:08.475722       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 14:22:08.476576       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 14:22:08.476677       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 14:22:08.478481       1 aggregator.go:165] initial CRD sync complete...
	I0520 14:22:08.478555       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 14:22:08.478579       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 14:22:08.478610       1 cache.go:39] Caches are synced for autoregister controller
	I0520 14:22:08.478794       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 14:22:08.478832       1 policy_source.go:224] refreshing policies
	I0520 14:22:08.485378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 14:22:08.485786       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 14:22:08.497303       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 14:22:08.497454       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0520 14:22:08.498904       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 14:22:08.517087       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 14:22:09.390081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 14:22:10.195278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 14:22:10.208255       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 14:22:10.256626       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 14:22:10.299204       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 14:22:10.308692       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 14:22:21.107150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 14:22:21.129898       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4] <==
	
	
	==> kube-controller-manager [0e1e420e7e507a5fec22eea7b5cf0b1829108f11b82d91affdee00c5b4a8bbd5] <==
	I0520 14:22:21.101704       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0520 14:22:21.106251       1 shared_informer.go:320] Caches are synced for job
	I0520 14:22:21.106306       1 shared_informer.go:320] Caches are synced for TTL
	I0520 14:22:21.106270       1 shared_informer.go:320] Caches are synced for cronjob
	I0520 14:22:21.106294       1 shared_informer.go:320] Caches are synced for GC
	I0520 14:22:21.108821       1 shared_informer.go:320] Caches are synced for taint
	I0520 14:22:21.108951       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0520 14:22:21.109050       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-462644"
	I0520 14:22:21.109126       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 14:22:21.111558       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0520 14:22:21.115312       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 14:22:21.117685       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 14:22:21.119607       1 shared_informer.go:320] Caches are synced for deployment
	I0520 14:22:21.123110       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0520 14:22:21.127049       1 shared_informer.go:320] Caches are synced for crt configmap
	I0520 14:22:21.129612       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 14:22:21.225070       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0520 14:22:21.237179       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:22:21.244500       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:22:21.271265       1 shared_informer.go:320] Caches are synced for disruption
	I0520 14:22:21.279539       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 14:22:21.292544       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0520 14:22:21.762215       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:22:21.785826       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:22:21.785860       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153] <==
	I0520 14:21:28.010087       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:21:28.028034       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 14:21:28.034848       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 14:21:28.082321       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0520 14:21:28.091473       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 14:21:28.564360       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:21:28.564456       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 14:21:28.588624       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 14:21:28.840374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="338.051894ms"
	I0520 14:21:28.866608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.183209ms"
	I0520 14:21:28.866788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.519µs"
	I0520 14:21:28.878893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.262µs"
	I0520 14:21:28.891512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.866µs"
	I0520 14:21:28.917192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.664µs"
	I0520 14:21:29.622488       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.248663ms"
	I0520 14:21:29.651428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.746705ms"
	I0520 14:21:29.651568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.177µs"
	I0520 14:21:30.339540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.28µs"
	I0520 14:21:30.378715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.257µs"
	I0520 14:21:39.151854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.038792ms"
	I0520 14:21:39.153268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="126.719µs"
	I0520 14:21:40.278126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.946µs"
	I0520 14:21:40.325732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.907µs"
	I0520 14:21:40.626262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.286µs"
	I0520 14:21:40.631379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.108µs"
	
	
	==> kube-proxy [a82d3960ce4a3d912baedf31aa74725e985046a35acf3657601d05a980a0dcc4] <==
	I0520 14:22:09.433490       1 server_linux.go:69] "Using iptables proxy"
	I0520 14:22:09.450788       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.77"]
	I0520 14:22:09.513079       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 14:22:09.513129       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 14:22:09.513160       1 server_linux.go:165] "Using iptables Proxier"
	I0520 14:22:09.517129       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 14:22:09.517362       1 server.go:872] "Version info" version="v1.30.1"
	I0520 14:22:09.517498       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:22:09.518946       1 config.go:192] "Starting service config controller"
	I0520 14:22:09.519016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 14:22:09.519097       1 config.go:101] "Starting endpoint slice config controller"
	I0520 14:22:09.519122       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 14:22:09.519570       1 config.go:319] "Starting node config controller"
	I0520 14:22:09.522101       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 14:22:09.621002       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 14:22:09.621113       1 shared_informer.go:320] Caches are synced for service config
	I0520 14:22:09.622561       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45] <==
	
	
	==> kube-scheduler [537eebcf1f781f270bc7b2a240ecc321a1489c4ab6cd8dcee2de8dccd536974d] <==
	I0520 14:22:06.607705       1 serving.go:380] Generated self-signed cert in-memory
	W0520 14:22:08.450852       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 14:22:08.450960       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 14:22:08.450972       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 14:22:08.450982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 14:22:08.467270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 14:22:08.467315       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 14:22:08.469735       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 14:22:08.470015       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 14:22:08.470104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 14:22:08.470063       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 14:22:08.571107       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1] <==
	
	
	==> kubelet <==
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.363237    3593 scope.go:117] "RemoveContainer" containerID="3f7dd00fffc660b262d5e304aee397db22bf4ab48951c778a11c1f238e209ac4"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.365282    3593 scope.go:117] "RemoveContainer" containerID="5095f4a6930eb5613f5aff1ab099b5e94070cb42605a0f83edc2391266da3153"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.367247    3593 scope.go:117] "RemoveContainer" containerID="abe6dbb008f6fc5323e4845861bb374a312f293310cc54fe983484f88333a0d1"
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.532014    3593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-462644?timeout=10s\": dial tcp 192.168.50.77:8443: connect: connection refused" interval="800ms"
	May 20 14:22:05 pause-462644 kubelet[3593]: I0520 14:22:05.636715    3593 kubelet_node_status.go:73] "Attempting to register node" node="pause-462644"
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.637742    3593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.77:8443: connect: connection refused" node="pause-462644"
	May 20 14:22:05 pause-462644 kubelet[3593]: W0520 14:22:05.725975    3593 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.726069    3593 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: W0520 14:22:05.727877    3593 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-462644&limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:05 pause-462644 kubelet[3593]: E0520 14:22:05.728004    3593 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-462644&limit=500&resourceVersion=0": dial tcp 192.168.50.77:8443: connect: connection refused
	May 20 14:22:06 pause-462644 kubelet[3593]: I0520 14:22:06.439653    3593 kubelet_node_status.go:73] "Attempting to register node" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.536154    3593 kubelet_node_status.go:112] "Node was previously registered" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.536708    3593 kubelet_node_status.go:76] "Successfully registered node" node="pause-462644"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.538317    3593 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.540116    3593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 14:22:08 pause-462644 kubelet[3593]: E0520 14:22:08.557510    3593 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-462644\" not found"
	May 20 14:22:08 pause-462644 kubelet[3593]: E0520 14:22:08.658221    3593 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-462644\" not found"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.918100    3593 apiserver.go:52] "Watching apiserver"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.921456    3593 topology_manager.go:215] "Topology Admit Handler" podUID="b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e" podNamespace="kube-system" podName="kube-proxy-sdp6h"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.921897    3593 topology_manager.go:215] "Topology Admit Handler" podUID="c4c57d06-48a2-4e2e-a5b3-43137c113173" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lvxbz"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.950447    3593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e-xtables-lock\") pod \"kube-proxy-sdp6h\" (UID: \"b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e\") " pod="kube-system/kube-proxy-sdp6h"
	May 20 14:22:08 pause-462644 kubelet[3593]: I0520 14:22:08.951787    3593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e-lib-modules\") pod \"kube-proxy-sdp6h\" (UID: \"b77bec0b-fb56-4e8a-ad2a-bc9cf48ac50e\") " pod="kube-system/kube-proxy-sdp6h"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.024254    3593 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.223168    3593 scope.go:117] "RemoveContainer" containerID="c3d5098f42101b5e1aa1bf3e04549d67a79b2f9f4576b0b8300c7959485b0a45"
	May 20 14:22:09 pause-462644 kubelet[3593]: I0520 14:22:09.223885    3593 scope.go:117] "RemoveContainer" containerID="4a9b29b2f9ad29ab12a236311ac9c866b337cdc4a2d02ab40551cb79498d5aae"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-462644 -n pause-462644
helpers_test.go:261: (dbg) Run:  kubectl --context pause-462644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.22s)
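For the failure above, the kubelet log shows the restart window in which the API server at 192.168.50.77:8443 was refusing connections. A minimal manual re-check of that endpoint, assuming the pause-462644 profile is still up (the /healthz probe and curl flags are illustrative additions, not part of the test):

    out/minikube-linux-amd64 -p pause-462644 ssh "curl -sk https://192.168.50.77:8443/healthz"
    kubectl --context pause-462644 get pods -A

A healthy control plane answers the first command with "ok"; a "connection refused" here reproduces the window the test raced against.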

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7200.056s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.195:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.195:8443: connect: connection refused
E0520 14:53:01.807904  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.195:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.195:8443: connect: connection refused
    [the warning above is repeated 24 more times before the next client.crt error]
E0520 14:53:26.907169  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/kindnet-862860/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.195:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.195:8443: connect: connection refused
    [the warning above is repeated 46 more times until the test binary times out]
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (33m58s)
	TestNetworkPlugins/group (27m30s)
	TestStartStop (33m44s)
	TestStartStop/group/default-k8s-diff-port (27m30s)
	TestStartStop/group/default-k8s-diff-port/serial (27m30s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (4m22s)
	TestStartStop/group/embed-certs (27m46s)
	TestStartStop/group/embed-certs/serial (27m46s)
	TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5m23s)
	TestStartStop/group/no-preload (28m27s)
	TestStartStop/group/no-preload/serial (28m27s)
	TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4m11s)
	TestStartStop/group/old-k8s-version (28m42s)
	TestStartStop/group/old-k8s-version/serial (28m42s)
	TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (1m12s)

                                                
                                                
goroutine 7717 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
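
The "panic: test timed out after 2h0m0s" above is the Go testing package's global deadline firing: testing.(*M).startAlarm arms a timer for the -timeout value passed to go test, and when it expires the timer goroutine panics, printing the still-running tests and then dumping every goroutine (everything that follows). A minimal, hypothetical reproduction of the mechanism, not part of the minikube suite:

package timeout_test

import (
	"testing"
	"time"
)

// TestOutrunsDeadline sleeps past the -timeout deadline. Run with:
//
//	go test -timeout 5s
//
// After five seconds the alarm goroutine started by testing.(*M).startAlarm panics
// with "panic: test timed out after 5s", listing the running tests and dumping all
// goroutines, the same shape as the output in this report.
func TestOutrunsDeadline(t *testing.T) {
	time.Sleep(10 * time.Second)
}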

                                                
                                                
goroutine 1 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000112680, 0xc00118fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006ca558, {0x4941b20, 0x2b, 0x2b}, {0x26a2ff3?, 0xc000a04900?, 0x49fe280?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006b8b40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006b8b40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000717c80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 3227 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bbe700, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3217
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 613 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0xc001257b00)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 628
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3783 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001a7b010, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c1c780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a7b040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d7f1e0, {0x3612ca0, 0xc001413da0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d7f1e0, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3757
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef
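
Many goroutines in this dump (goroutine 3783 above and the similar cert_rotation workers below) are parked in sync.Cond.Wait inside workqueue.(*Type).Get, which only means their client-go work queue is currently empty. A generic, simplified sketch of that consumer pattern, not the actual dynamicClientCert worker:

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// runWorker mirrors the Get/Done loop used by client-go workers such as
// dynamicClientCert.runWorker: Get blocks on a condition variable while the queue
// is empty, which is exactly the sync.runtime_notifyListWait frame in the traces.
func runWorker(queue workqueue.Interface) {
	for {
		item, shutdown := queue.Get()
		if shutdown {
			return
		}
		fmt.Println("processing", item)
		queue.Done(item) // mark the item finished so it may be re-queued later
	}
}

func main() {
	q := workqueue.New()
	q.Add("rotate-client-cert")
	go runWorker(q)
	q.ShutDownWithDrain() // drain outstanding items, then Get returns shutdown=true
}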

                                                
                                                
goroutine 2828 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2827
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 30 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 29
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 982 [chan send, 88 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018e2840, 0xc0018e41e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 850
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1026 [chan send, 88 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ad06e0, 0xc001545740)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 977
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2791 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001bfa340, {0x264a09d?, 0x0?}, 0xc0001c5b80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfa340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfa340, 0xc001bbe300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2737
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2212 [chan receive, 34 minutes]:
testing.(*T).Run(0xc001c9e000, {0x2648b06?, 0x55149c?}, 0xc001700240)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001c9e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001c9e000, 0x30b9808)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2400 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00069ba90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0012a6000, 0xc001700240)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2212
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3785 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3784
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 6468 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3636930, 0xc000886a00}, {0x362a040, 0xc001915800}, 0x1, 0x0, 0xc00118db40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc000700230?}, 0x3b9aca00, 0xc00118dd38?, 0x1, 0xc00118db40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc000700230}, 0xc001c9fba0, {0xc0013efd58, 0x11}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36369a0, 0xc000700230}, 0xc001c9fba0, {0xc0013efd58, 0x11}, {0x2653c28?, 0xc001296760?}, {0x551353?, 0x4a16cf?}, {0xc0001cdb00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c9fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c9fba0, 0xc001e50100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3478
	/usr/local/go/src/testing/testing.go:1742 +0x390
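
Goroutine 6468 above (and its siblings for the other StartStop groups) is the AddonExistsAfterStop check itself: integration.PodWait polling through wait.PollUntilContextTimeout for kubernetes-dashboard pods, which is also what produced the earlier "connection refused" WARNING lines while the apiserver was unreachable. A minimal sketch of that kind of poll loop, assuming a standard client-go clientset built from the local kubeconfig; this is not the minikube helper itself, and the 9-minute timeout is taken from the 0x7dba821800 ns (540 s) final argument visible in the trace:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll once per second for up to 9 minutes for a dashboard pod; a "connection
	// refused" from the apiserver is only logged and the poll retries, which is
	// what fills the log with the WARNING lines seen earlier in this report.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Println("WARNING: pod list returned:", err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		fmt.Println("timed out waiting for kubernetes-dashboard pods:", err)
	}
}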

                                                
                                                
goroutine 147 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00070d390, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00114ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00070d400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006e5c30, {0x3612ca0, 0xc0006694a0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006e5c30, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 174
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3484 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0016cf750, 0xc0016cf798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x32?, 0xc0016cf750, 0xc0016cf798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0x466567616d492620?, 0x7165526f666e4973?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x726f747065637265?, 0x2232363a6f672e73?, 0x356134653d646920?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3462
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3733 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3732
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3485 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3484
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 840 [chan send, 88 minutes]:
os/exec.(*Cmd).watchCtx(0xc0006eac60, 0xc0012d6060)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 839
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 173 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00114ef60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 174 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00070d400, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 148 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc000505750, 0xc001250f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0xd?, 0xc000505750, 0xc000505798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc000112b60?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005057d0?, 0x593064?, 0xc0005af080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 174
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 149 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3546 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0016cff50, 0xc001165f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x0?, 0xc0016cff50, 0xc0016cff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc0016cffb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0016cffd0?, 0x9aba85?, 0xc000002780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3516
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 741 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0x7fba81f93440, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x11?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0017aa100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0017aa100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000704860)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000704860)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006520f0, {0x3629980, 0xc000704860})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0006520f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0006d5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 738
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
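
Goroutine 741 above has been in IO wait for 108 minutes: it is the background HTTP proxy that startHTTPProxy launches for the functional tests, blocked in net/http.(*Server).ListenAndServe waiting to accept connections, which is expected for the lifetime of the run. A hypothetical sketch of that pattern as a reusable test helper; the names here are illustrative, not minikube's actual helper:

package proxy_test

import (
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"testing"
)

// startTestProxy starts a reverse proxy to upstream on a free local port and keeps
// it serving on a background goroutine for the rest of the test, mirroring the
// long-lived accept loop seen as "IO wait" in the goroutine dump.
func startTestProxy(t *testing.T, upstream string) string {
	t.Helper()
	u, err := url.Parse(upstream)
	if err != nil {
		t.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // pick any free port
	if err != nil {
		t.Fatal(err)
	}
	srv := &http.Server{Handler: httputil.NewSingleHostReverseProxy(u)}
	go func() {
		// Serve blocks in Accept until the listener is closed.
		_ = srv.Serve(ln)
	}()
	t.Cleanup(func() { _ = srv.Close() })
	return "http://" + ln.Addr().String()
}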

                                                
                                                
goroutine 4555 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017616e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4554
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 893 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0012edf50, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x0?, 0xc0012edf50, 0xc0012edf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012edfd0?, 0x593064?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 903
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2981 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001bbe910, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001bbb080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bbe940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00110efc0, {0x3612ca0, 0xc001ad4330}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00110efc0, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4032 [IO wait]:
internal/poll.runtime_pollWait(0x7fba8026aee8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001200600?, 0xc001489000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001200600, {0xc001489000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001200600, {0xc001489000?, 0xc00045e780?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000902c18, {0xc001489000?, 0xc00148905f?, 0x70?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc001be8e40, {0xc001489000?, 0x0?, 0xc001be8e40?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0013a97b0, {0x3613440, 0xc001be8e40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0013a9508, {0x7fba80324058, 0xc001ded590}, 0xc000a62980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0013a9508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0013a9508, {0xc001321000, 0x1000, 0xc001a86540?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00186c3c0, {0xc00114d1c0, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc00186c3c0}, {0xc00114d1c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00114d1c0, 0x9, 0xa62dc0?}, {0x3611920?, 0xc00186c3c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00114d180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000a62fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000209500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 4031
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 3515 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001be57a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2983 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2982
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3285 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3284
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3239 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0016d5750, 0xc0016d5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x61?, 0xc0016d5750, 0xc0016d5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0x3a7963696c6f5065?, 0x6f692c656c694620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x203a646f69726550?, 0x262c7d2c7d2c3033?, 0x656e6961746e6f43?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3227
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2737 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0012a7860, 0x30b9a28)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2250
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3461 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001bbb020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 902 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011fc4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 4556 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001c62740, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4554
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2961 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001bbb1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1146 [select, 88 minutes]:
net/http.(*persistConn).readLoop(0xc001de45a0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1188
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 903 [chan receive, 88 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0002172c0, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3711 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017b7b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3716
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 555 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0xc0013d30e0)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 614
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 894 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 893
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3731 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001bbe510, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017b79e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bbe540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00063d580, {0x3612ca0, 0xc0013e1ec0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00063d580, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3712
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 554 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0xc0013d30e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 614
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 612 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0xc001257b00)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 628
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 4594 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4577
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2982 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc001297f50, 0xc001167f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x80?, 0xc001297f50, 0xc001297f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc001297fb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc00085a000?, 0xc001544180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3123 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0002170d0, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c87620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000217100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008e6350, {0x3612ca0, 0xc0014801b0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008e6350, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3113
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3756 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c1c900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 892 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000217290, 0x25)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0011fc3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0002172c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001244220, {0x3612ca0, 0xc001ca1380}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001244220, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 903
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 5941 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3636930, 0xc001901c70}, {0x362a040, 0xc0007bfe60}, 0x1, 0x0, 0xc00120bb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc0005a61c0?}, 0x3b9aca00, 0xc00118dd38?, 0x1, 0xc00118db40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc0005a61c0}, 0xc001c9f860, {0xc0019b6030, 0x12}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36369a0, 0xc0005a61c0}, 0xc001c9f860, {0xc0019b6030, 0x12}, {0x2655e04?, 0xc001213f60?}, {0x551353?, 0x4a16cf?}, {0xc0011f0f00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c9f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c9f860, 0xc001e50080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3634
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3124 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc000099f50, 0xc00124ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0xa0?, 0xc000099f50, 0xc000099f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc001c9e680?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000099fd0?, 0x593064?, 0xc0018e51a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3113
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3516 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008e47c0, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 7056 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3636930, 0xc00193f3b0}, {0x362a040, 0xc0007bebc0}, 0x1, 0x0, 0xc001189b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc000700000?}, 0x3b9aca00, 0xc001189d38?, 0x1, 0xc001189b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc000700000}, 0xc001c9e820, {0xc0013ef5a8, 0x16}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36369a0, 0xc000700000}, 0xc001c9e820, {0xc0013ef5a8, 0x16}, {0x265fd50?, 0xc00120ff60?}, {0x551353?, 0x4a16cf?}, {0xc000209380, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c9e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c9e820, 0xc001e50100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3368
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1147 [select, 88 minutes]:
net/http.(*persistConn).writeLoop(0xc001de45a0)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1188
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3634 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00128d380, {0x266e954?, 0x60400000004?}, 0xc001e50080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00128d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00128d380, 0xc0001c5b80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2791
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2786 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0012a7a00, {0x264a09d?, 0x0?}, 0xc001bfe080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012a7a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0012a7a00, 0xc001bbe180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2737
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3483 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000216b90, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001bbaf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000216c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b06e10, {0x3612ca0, 0xc0014133b0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b06e10, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3462
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3280 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008e5c40, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2827 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0013ddf50, 0xc0013ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x60?, 0xc0013ddf50, 0xc0013ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc001a345a0?, 0xc001bfe200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc001bac9a0?, 0xc001baac60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2814
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3547 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3546
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2789 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001bfa000, {0x264a09d?, 0x0?}, 0xc001bfe500)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfa000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfa000, 0xc001bbe240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2737
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2788 [chan receive, 27 minutes]:
testing.(*T).Run(0xc0012a7d40, {0x264a09d?, 0x0?}, 0xc00050c080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012a7d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0012a7d40, 0xc001bbe200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2737
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2250 [chan receive, 34 minutes]:
testing.(*T).Run(0xc0012a6340, {0x2648b06?, 0x551353?}, 0x30b9a28)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0012a6340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0012a6340, 0x30b9850)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3368 [chan receive]:
testing.(*T).Run(0xc001c9f380, {0x266e954?, 0x60400000004?}, 0xc001e50100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c9f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c9f380, 0xc001bfe080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2786
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3659 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001bfa4e0, {0x266e954?, 0x60400000004?}, 0xc00050c380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bfa4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bfa4e0, 0xc00050c080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2788
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3279 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00114f8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3462 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000216c40, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3712 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bbe540, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3716
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3478 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001c9f6c0, {0x266e954?, 0x60400000004?}, 0xc001e50100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c9f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c9f6c0, 0xc001bfe500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2789
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2978 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bbe940, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2813 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001163620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2838
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3113 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000217100, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3111
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3112 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c87740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3111
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2787 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00069ba90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0012a7ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0012a7ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012a7ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0012a7ba0, 0xc001bbe1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2737
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3283 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008e5c10, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00114f7a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008e5c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006e5520, {0x3612ca0, 0xc0008eb3b0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006e5520, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3280
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3757 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a7b040, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 6387 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3636930, 0xc000886870}, {0x362a040, 0xc000884480}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc0003aa2a0?}, 0x3b9aca00, 0xc001189d38?, 0x1, 0xc001189b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc0003aa2a0}, 0xc001c9fa00, {0xc001b52000, 0x1c}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36369a0, 0xc0003aa2a0}, 0xc001c9fa00, {0xc001b52000, 0x1c}, {0x26717ac?, 0xc001210760?}, {0x551353?, 0x4a16cf?}, {0xc0006c4500, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c9fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c9fa00, 0xc00050c380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3659
	/usr/local/go/src/testing/testing.go:1742 +0x390
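
The frames in goroutine 6387 above show the shape of the hang: the subtest is parked in integration.PodWait, which drives wait.PollUntilContextTimeout with a 1s interval (0x3b9aca00 ns) and what appears to be a ~9 minute timeout (0x7dba821800 ns). A minimal Go sketch of that polling pattern, assuming apimachinery v0.30.1 as pinned in the stacks; podIsReady below is a hypothetical stand-in for the real pod check, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podIsReady is a hypothetical stand-in for the "pod is Running and Ready"
// check that a helper like PodWait performs against the cluster.
func podIsReady(ctx context.Context) (bool, error) {
	return false, nil // (false, nil) means "not yet, keep polling"
}

func main() {
	// Poll every second, immediate=true, give up after ~9 minutes --
	// the same argument shape visible in the stack above.
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true, podIsReady)
	if err != nil {
		fmt.Println("condition never became true:", err) // e.g. context deadline exceeded
	}
}

Returning a non-nil error from the condition aborts the poll immediately; returning (false, nil) keeps retrying until the deadline, which is why this goroutine sits in select until the poll or the test itself times out.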

                                                
                                                
goroutine 2826 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0013b6ad0, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001163500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0013b6b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019ac400, {0x3612ca0, 0xc0012d2060}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019ac400, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2814
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef
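
Goroutine 2826 (and the other sync.Cond.Wait goroutines in this dump) are client-go cert-rotation workers idling in the standard workqueue consumer loop: wait.Until re-runs a worker that blocks in Get until an item arrives or the queue shuts down. A small sketch of that pattern, assuming client-go v0.30.1 as pinned in the stacks; the processing body here is a placeholder, not the cert-rotation logic itself:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

// processNextWorkItem mirrors the Get -> work -> Done shape seen in
// cert_rotation.go's processNextWorkItem; the work itself is a placeholder.
func processNextWorkItem(q workqueue.Interface) bool {
	item, shutdown := q.Get() // blocks in sync.Cond.Wait while the queue is empty
	if shutdown {
		return false
	}
	defer q.Done(item) // every Get must be paired with Done
	fmt.Println("processing", item)
	return true
}

func main() {
	queue := workqueue.New()
	stopCh := make(chan struct{})

	// wait.Until restarts the drain loop every second until stopCh closes,
	// which matches the runWorker structure in the stacks above.
	go wait.Until(func() {
		for processNextWorkItem(queue) {
		}
	}, time.Second, stopCh)

	queue.Add("rotate-client-cert") // producer side
	time.Sleep(100 * time.Millisecond)
	close(stopCh)
	queue.ShutDown()
}

A worker parked in Get like this is the queue's normal idle state between items; such goroutines linger for the life of the process because the stop channel passed to Run is only closed at shutdown.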

                                                
                                                
goroutine 3125 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3124
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2814 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0013b6b00, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2838
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3545 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008e4750, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001be5680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008e47c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001263520, {0x3612ca0, 0xc000669560}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001263520, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3516
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3284 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc000a3af50, 0xc000a3af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x0?, 0xc000a3af50, 0xc000a3af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc0012a6000?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc001e4a580?, 0xc001b9c600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3280
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3732 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc001298750, 0xc001298798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0xe0?, 0xc001298750, 0xc001298798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc0012987b0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012987d0?, 0x593064?, 0xc000a347e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3712
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3240 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3239
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3226 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c86060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3217
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3238 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001bbe6d0, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00186dec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bbe700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008e6990, {0x3612ca0, 0xc000669350}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008e6990, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3227
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3784 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0x60?, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc0000947b0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc0016fe160?, 0xc000061e60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3757
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3898 [IO wait]:
internal/poll.runtime_pollWait(0x7fba8026b3c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017ab680?, 0xc001193800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017ab680, {0xc001193800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0017ab680, {0xc001193800?, 0x7fba802c4de0?, 0xc0012ccac8?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001b0c288, {0xc001193800?, 0xc001169938?, 0x41467b?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0012ccac8, {0xc001193800?, 0x0?, 0xc0012ccac8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001bb430, {0x3613440, 0xc0012ccac8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001bb188, {0x3612820, 0xc001b0c288}, 0xc001169980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001bb188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0001bb188, {0xc001415000, 0x1000, 0xc001a86540?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001bbbd40, {0xc0001c98c0, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc001bbbd40}, {0xc0001c98c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0001c98c0, 0x9, 0x1169dc0?}, {0x3611920?, 0xc001bbbd40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001c9880)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001169fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0017b4180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3897
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 4577 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc00052a000}, 0xc0016cef50, 0xc0016cef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc00052a000}, 0xe0?, 0xc0016cef50, 0xc0016cef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc00052a000?}, 0xc0016cefb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99cfbb?, 0xc0013f4f00?, 0xc001544de0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4556
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4576 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001c62710, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001761500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001c62740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00127c560, {0x3612ca0, 0xc0012786c0}, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00127c560, 0x3b9aca00, 0x0, 0x1, 0xc00052a000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4556
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3932 [IO wait]:
internal/poll.runtime_pollWait(0x7fba8026b4b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00050db80?, 0xc001500800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00050db80, {0xc001500800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00050db80, {0xc001500800?, 0xc0005683c0?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000854028, {0xc001500800?, 0xc001500860?, 0x70?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc001be8dc8, {0xc001500800?, 0x0?, 0xc001be8dc8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc000852630, {0x3613440, 0xc001be8dc8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000852388, {0x7fba80324058, 0xc001700000}, 0xc000a56980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000852388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000852388, {0xc0009f9000, 0x1000, 0xc001a86540?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001be46c0, {0xc0001c9e00, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc001be46c0}, {0xc0001c9e00, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0001c9e00, 0x9, 0xa56dc0?}, {0x3611920?, 0xc001be46c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001c9dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000a56fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0013f4000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3931
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d
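
The stacks above look like the standard goroutine dump a Go test binary emits when it hits its -timeout or receives SIGQUIT. For reference, the same style of dump can be produced on demand with runtime/pprof; a minimal sketch:

package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// debug=2 prints every goroutine in the panic/timeout format used above,
	// including the "[chan receive, N minutes]"-style wait annotations.
	_ = pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
}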

                                                
                                    

Test pass (172/221)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.1/json-events 12.5
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.14
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 67.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 191.09
29 TestAddons/parallel/Registry 22.5
31 TestAddons/parallel/InspektorGadget 6.32
33 TestAddons/parallel/HelmTiller 10.95
35 TestAddons/parallel/CSI 47.06
36 TestAddons/parallel/Headlamp 14.03
38 TestAddons/parallel/LocalPath 58.18
39 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.13
45 TestCertOptions 66.04
46 TestCertExpiration 281.86
48 TestForceSystemdFlag 68.9
49 TestForceSystemdEnv 65.36
51 TestKVMDriverInstallOrUpdate 4.18
56 TestErrorSpam/start 0.38
57 TestErrorSpam/status 0.74
58 TestErrorSpam/pause 1.52
59 TestErrorSpam/unpause 1.6
60 TestErrorSpam/stop 4.33
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 65.34
65 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.57
72 TestFunctional/serial/CacheCmd/cache/add_local 2.56
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 31.01
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.31
83 TestFunctional/serial/LogsFileCmd 1.36
84 TestFunctional/serial/InvalidService 4.48
86 TestFunctional/parallel/ConfigCmd 0.38
87 TestFunctional/parallel/DashboardCmd 16.47
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1.19
94 TestFunctional/parallel/ServiceCmdConnect 11.57
95 TestFunctional/parallel/AddonsCmd 0.18
96 TestFunctional/parallel/PersistentVolumeClaim 38.26
98 TestFunctional/parallel/SSHCmd 0.44
99 TestFunctional/parallel/CpCmd 1.23
100 TestFunctional/parallel/MySQL 28.64
101 TestFunctional/parallel/FileSync 0.26
102 TestFunctional/parallel/CertSync 1.44
106 TestFunctional/parallel/NodeLabels 0.09
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
110 TestFunctional/parallel/License 0.84
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
114 TestFunctional/parallel/MountCmd/any-port 10.75
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
117 TestFunctional/parallel/ProfileCmd/profile_list 0.32
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
119 TestFunctional/parallel/MountCmd/specific-port 1.8
120 TestFunctional/parallel/ServiceCmd/List 0.85
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
124 TestFunctional/parallel/ServiceCmd/Format 0.37
125 TestFunctional/parallel/ServiceCmd/URL 0.34
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.56
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.34
133 TestFunctional/parallel/ImageCommands/Setup 1.99
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.77
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.41
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.07
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.7
147 TestFunctional/parallel/ImageCommands/ImageRemove 1.41
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.75
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
150 TestFunctional/delete_addon-resizer_images 0.08
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 208.23
157 TestMultiControlPlane/serial/DeployApp 6.4
158 TestMultiControlPlane/serial/PingHostFromPods 1.27
159 TestMultiControlPlane/serial/AddWorkerNode 44.44
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
162 TestMultiControlPlane/serial/CopyFile 12.78
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.94
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
171 TestMultiControlPlane/serial/RestartCluster 348.81
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
173 TestMultiControlPlane/serial/AddSecondaryNode 70.37
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
178 TestJSONOutput/start/Command 62.73
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.66
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.6
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.22
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 87.33
210 TestMountStart/serial/StartWithMountFirst 29.05
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 25.09
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.68
215 TestMountStart/serial/VerifyMountPostDelete 0.37
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 20.37
218 TestMountStart/serial/VerifyMountPostStop 0.37
221 TestMultiNode/serial/FreshStart2Nodes 96.43
222 TestMultiNode/serial/DeployApp2Nodes 5.35
223 TestMultiNode/serial/PingHostFrom2Pods 0.78
224 TestMultiNode/serial/AddNode 38.44
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.22
228 TestMultiNode/serial/StopNode 2.27
229 TestMultiNode/serial/StartAfterStop 29.04
231 TestMultiNode/serial/DeleteNode 2.5
233 TestMultiNode/serial/RestartMultiNode 191.29
234 TestMultiNode/serial/ValidateNameConflict 48.3
241 TestScheduledStopUnix 112.62
245 TestRunningBinaryUpgrade 211.16
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
251 TestNoKubernetes/serial/StartWithK8s 87.97
252 TestStoppedBinaryUpgrade/Setup 2.28
253 TestStoppedBinaryUpgrade/Upgrade 152.25
254 TestNoKubernetes/serial/StartWithStopK8s 65.68
255 TestNoKubernetes/serial/Start 28.94
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
257 TestNoKubernetes/serial/ProfileList 27.05
258 TestNoKubernetes/serial/Stop 1.59
259 TestNoKubernetes/serial/StartNoArgs 41.46
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
281 TestPause/serial/Start 72.91
x
+
TestDownloadOnly/v1.20.0/json-events (23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-562366 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-562366 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.996703339s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-562366
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-562366: exit status 85 (67.027942ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |          |
	|         | -p download-only-562366        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:54:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:54:13.689864  609879 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:54:13.689974  609879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:13.689978  609879 out.go:304] Setting ErrFile to fd 2...
	I0520 12:54:13.689982  609879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:13.690169  609879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	W0520 12:54:13.690323  609879 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18929-602525/.minikube/config/config.json: open /home/jenkins/minikube-integration/18929-602525/.minikube/config/config.json: no such file or directory
	I0520 12:54:13.690893  609879 out.go:298] Setting JSON to true
	I0520 12:54:13.691787  609879 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9394,"bootTime":1716200260,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:54:13.691849  609879 start.go:139] virtualization: kvm guest
	I0520 12:54:13.695095  609879 out.go:97] [download-only-562366] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:54:13.697524  609879 out.go:169] MINIKUBE_LOCATION=18929
	W0520 12:54:13.695210  609879 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 12:54:13.695240  609879 notify.go:220] Checking for updates...
	I0520 12:54:13.702155  609879 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:54:13.704371  609879 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:54:13.706506  609879 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:13.708663  609879 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0520 12:54:13.712873  609879 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 12:54:13.713115  609879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:54:13.750017  609879 out.go:97] Using the kvm2 driver based on user configuration
	I0520 12:54:13.750047  609879 start.go:297] selected driver: kvm2
	I0520 12:54:13.750066  609879 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:54:13.750465  609879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:13.750542  609879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:54:13.766365  609879 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:54:13.766427  609879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:54:13.766846  609879 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0520 12:54:13.766985  609879 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 12:54:13.767013  609879 cni.go:84] Creating CNI manager for ""
	I0520 12:54:13.767021  609879 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:54:13.767028  609879 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:54:13.767089  609879 start.go:340] cluster config:
	{Name:download-only-562366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-562366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:54:13.767302  609879 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:13.770243  609879 out.go:97] Downloading VM boot image ...
	I0520 12:54:13.770285  609879 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:54:22.816549  609879 out.go:97] Starting "download-only-562366" primary control-plane node in "download-only-562366" cluster
	I0520 12:54:22.816576  609879 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 12:54:22.923335  609879 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 12:54:22.923373  609879 cache.go:56] Caching tarball of preloaded images
	I0520 12:54:22.923544  609879 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 12:54:22.926128  609879 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 12:54:22.926155  609879 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0520 12:54:23.035500  609879 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-562366 host does not exist
	  To start a cluster, run: "minikube start -p download-only-562366"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-562366
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (12.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-600768 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-600768 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.497259231s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (12.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-600768
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-600768: exit status 85 (67.264284ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | -p download-only-562366        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| delete  | -p download-only-562366        | download-only-562366 | jenkins | v1.33.1 | 20 May 24 12:54 UTC | 20 May 24 12:54 UTC |
	| start   | -o=json --download-only        | download-only-600768 | jenkins | v1.33.1 | 20 May 24 12:54 UTC |                     |
	|         | -p download-only-600768        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:54:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:54:37.020021  610118 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:54:37.020163  610118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:37.020174  610118 out.go:304] Setting ErrFile to fd 2...
	I0520 12:54:37.020181  610118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:54:37.020377  610118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 12:54:37.020944  610118 out.go:298] Setting JSON to true
	I0520 12:54:37.021841  610118 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9417,"bootTime":1716200260,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:54:37.021901  610118 start.go:139] virtualization: kvm guest
	I0520 12:54:37.024958  610118 out.go:97] [download-only-600768] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:54:37.027374  610118 out.go:169] MINIKUBE_LOCATION=18929
	I0520 12:54:37.025126  610118 notify.go:220] Checking for updates...
	I0520 12:54:37.031608  610118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:54:37.033880  610118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 12:54:37.036150  610118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 12:54:37.038433  610118 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0520 12:54:37.042540  610118 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 12:54:37.042789  610118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:54:37.076890  610118 out.go:97] Using the kvm2 driver based on user configuration
	I0520 12:54:37.076919  610118 start.go:297] selected driver: kvm2
	I0520 12:54:37.076925  610118 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:54:37.077294  610118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:37.077397  610118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18929-602525/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:54:37.094048  610118 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:54:37.094113  610118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:54:37.094544  610118 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0520 12:54:37.094679  610118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 12:54:37.094706  610118 cni.go:84] Creating CNI manager for ""
	I0520 12:54:37.094714  610118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:54:37.094724  610118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:54:37.094769  610118 start.go:340] cluster config:
	{Name:download-only-600768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-600768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:54:37.094851  610118 iso.go:125] acquiring lock: {Name:mk4f2429919c34a94c0c65581bbd7b2ba8f42c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:54:37.097806  610118 out.go:97] Starting "download-only-600768" primary control-plane node in "download-only-600768" cluster
	I0520 12:54:37.097846  610118 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:54:37.491992  610118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:54:37.492046  610118 cache.go:56] Caching tarball of preloaded images
	I0520 12:54:37.492220  610118 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:54:37.494985  610118 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 12:54:37.495007  610118 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0520 12:54:37.592174  610118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/18929-602525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-600768 host does not exist
	  To start a cluster, run: "minikube start -p download-only-600768"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-600768
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-910817 --alsologtostderr --binary-mirror http://127.0.0.1:44813 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-910817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-910817
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (67.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-866828 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-866828 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.719384412s)
helpers_test.go:175: Cleaning up "offline-crio-866828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-866828
--- PASS: TestOffline (67.56s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-840762
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-840762: exit status 85 (51.751974ms)

                                                
                                                
-- stdout --
	* Profile "addons-840762" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-840762"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-840762
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-840762: exit status 85 (52.909068ms)

                                                
                                                
-- stdout --
	* Profile "addons-840762" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-840762"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (191.09s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-840762 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-840762 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m11.092441433s)
--- PASS: TestAddons/Setup (191.09s)

                                                
                                    
x
+
TestAddons/parallel/Registry (22.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 11.512593ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jwvq5" [11f262c9-d0cf-456f-bfd1-fa66f364ffaf] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005842745s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xpxjv" [ca35b86e-6424-40e0-a0d6-cbd41f0ccab0] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00679887s
addons_test.go:340: (dbg) Run:  kubectl --context addons-840762 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-840762 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-840762 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.647308589s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 ip
2024/05/20 12:58:23 [DEBUG] GET http://192.168.39.19:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.50s)
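
A minimal manual reproduction of the registry check above, assuming the addons-840762 profile from this run is still up. The /v2/_catalog path is an illustrative registry API endpoint; the test itself only issues a plain GET against the node IP on port 5000.

	# hit the registry Service from inside the cluster, then the node IP from the host
	kubectl --context addons-840762 run registry-test --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl "http://$(minikube -p addons-840762 ip):5000/v2/_catalog"   # illustrative; any 2xx means the node's registry-proxy forwards to the registry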

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4r2zg" [20112e09-b29e-4ddb-96ef-4d06088304a4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00504129s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-840762
--- PASS: TestAddons/parallel/InspektorGadget (6.32s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.95s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.342426ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-9z85l" [a58791b3-4277-403d-9b31-4f938890905e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004568083s
addons_test.go:473: (dbg) Run:  kubectl --context addons-840762 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-840762 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.347633895s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.95s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.1368ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-840762 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-840762 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9af51654-cb9f-422e-b702-34c99a24bdc7] Pending
helpers_test.go:344: "task-pv-pod" [9af51654-cb9f-422e-b702-34c99a24bdc7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9af51654-cb9f-422e-b702-34c99a24bdc7] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004445572s
addons_test.go:584: (dbg) Run:  kubectl --context addons-840762 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-840762 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-840762 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-840762 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-840762 delete pod task-pv-pod: (1.251315773s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-840762 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-840762 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-840762 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3a7f9cac-a34b-44eb-b058-6e1bfa8b125a] Pending
helpers_test.go:344: "task-pv-pod-restore" [3a7f9cac-a34b-44eb-b058-6e1bfa8b125a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3a7f9cac-a34b-44eb-b058-6e1bfa8b125a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003959635s
addons_test.go:626: (dbg) Run:  kubectl --context addons-840762 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-840762 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-840762 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.802649604s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.06s)
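
The CSI block above walks the full PVC -> pod -> VolumeSnapshot -> restore loop. A minimal sketch of the snapshot half of that loop, assuming the standard snapshot.storage.k8s.io/v1 API; the snapshot class name is an assumption, since the testdata manifests themselves are not shown in this log.

	# snapshot the bound PVC, then poll readiness the same way the helper above does
	kubectl --context addons-840762 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-840762 get volumesnapshot new-snapshot-demo \
	  -o jsonpath='{.status.readyToUse}' -n default
	# hpvc-restore is then a new PVC whose spec.dataSource references this
	# VolumeSnapshot (apiGroup snapshot.storage.k8s.io), which pv-pod-restore mounts.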

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-840762 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-840762 --alsologtostderr -v=1: (1.022285248s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-5k6z6" [c7973bac-822b-4c44-a10c-65bcfdb5f17d] Pending
helpers_test.go:344: "headlamp-68456f997b-5k6z6" [c7973bac-822b-4c44-a10c-65bcfdb5f17d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-5k6z6" [c7973bac-822b-4c44-a10c-65bcfdb5f17d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004178639s
--- PASS: TestAddons/parallel/Headlamp (14.03s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.18s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-840762 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-840762 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [429dc7e0-07e3-454c-9504-ac4e03b1842d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [429dc7e0-07e3-454c-9504-ac4e03b1842d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [429dc7e0-07e3-454c-9504-ac4e03b1842d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007960245s
addons_test.go:891: (dbg) Run:  kubectl --context addons-840762 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 ssh "cat /opt/local-path-provisioner/pvc-ef6f8a93-1567-44f6-8095-fb964ae1388e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-840762 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-840762 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-840762 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-840762 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.317837784s)
--- PASS: TestAddons/parallel/LocalPath (58.18s)
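
The LocalPath test above writes a file through a PVC and then reads it straight off the node's disk. A short sketch of locating that backing directory by hand, assuming the profile is still running; the directory naming pattern (<pv-name>_<namespace>_<pvc-name>) is taken from the path in the ssh step above.

	# find the PV backing the claim, then look under the provisioner's data root
	PV=$(kubectl --context addons-840762 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	minikube -p addons-840762 ssh -- ls /opt/local-path-provisioner/
	minikube -p addons-840762 ssh -- cat "/opt/local-path-provisioner/${PV}_default_test-pvc/file1"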

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w5d66" [88344eab-652a-4d9d-9f7f-171aa2936225] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008207977s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-840762
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-hgp7b" [98ccbc95-97f1-48f6-99a4-6c335bd4b99d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004197774s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-840762 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-840762 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestCertOptions (66.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-565318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-565318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.745037961s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-565318 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-565318 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-565318 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-565318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-565318
--- PASS: TestCertOptions (66.04s)
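
The openssl and kubectl steps above confirm that the extra SANs and the non-default API server port made it into the cluster. A minimal way to eyeball the same thing on that profile, assuming it is still up:

	# the SAN list should include 192.168.15.15 and www.google.com; the server URL should use :8555
	minikube -p cert-options-565318 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-565318")].cluster.server}'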

                                                
                                    
x
+
TestCertExpiration (281.86s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-449986 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-449986 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (39.426173809s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-449986 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-449986 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m1.276434628s)
helpers_test.go:175: Cleaning up "cert-expiration-449986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-449986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-449986: (1.154302445s)
--- PASS: TestCertExpiration (281.86s)
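
Read together, the two starts above show the intended flow: create the cluster with a deliberately short certificate lifetime, let it lapse, then start again with a long lifetime so the certificates are regenerated on restart. A condensed sketch of that sequence; the wait in the middle is implied by the 3m expiry and the overall runtime, not printed in the log.

	minikube start -p cert-expiration-449986 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 200   # let the 3m certificates expire (assumed pause)
	minikube start -p cert-expiration-449986 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
	minikube delete -p cert-expiration-449986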

                                                
                                    
x
+
TestForceSystemdFlag (68.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-660615 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-660615 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.856264924s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-660615 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-660615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-660615
--- PASS: TestForceSystemdFlag (68.90s)
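
The flag test above only needs to see the cgroup-manager drop-in that --force-systemd writes for CRI-O. A one-liner sketch of the same check; the expected value is an assumption based on CRI-O's usual configuration key, not quoted from this log.

	# expected (assumed): cgroup_manager = "systemd"
	minikube -p force-systemd-flag-660615 ssh -- cat /etc/crio/crio.conf.d/02-crio.conf | grep cgroup_manager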

                                                
                                    
x
+
TestForceSystemdEnv (65.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-934719 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-934719 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.542151901s)
helpers_test.go:175: Cleaning up "force-systemd-env-934719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-934719
--- PASS: TestForceSystemdEnv (65.36s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.18s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.18s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
x
+
TestErrorSpam/stop (4.33s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop: (1.612398464s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop: (1.401682343s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-609784 --log_dir /tmp/nospam-609784 stop: (1.318733771s)
--- PASS: TestErrorSpam/stop (4.33s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18929-602525/.minikube/files/etc/test/nested/copy/609867/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (65.34s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0520 13:08:01.807865  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:01.813701  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:01.824017  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:01.844391  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:01.884714  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:01.965153  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:02.125596  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:02.446187  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:03.087188  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:04.367741  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:06.928674  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:12.049158  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:08:22.289411  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-694790 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m5.33660412s)
--- PASS: TestFunctional/serial/StartWithProxy (65.34s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-694790 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:3.1: (1.441233209s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:3.3: (1.539124476s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 cache add registry.k8s.io/pause:latest: (1.594054725s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.57s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-694790 /tmp/TestFunctionalserialCacheCmdcacheadd_local4230097966/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache add minikube-local-cache-test:functional-694790
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 cache add minikube-local-cache-test:functional-694790: (2.123354519s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache delete minikube-local-cache-test:functional-694790
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-694790
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.56s)
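
The add_local test shows the round trip for a locally built image: build it with docker, push it into the node's cache with `cache add`, and clean up both sides afterwards. The same steps by hand, with an illustrative build-context path:

	docker build -t minikube-local-cache-test:functional-694790 ./local-context   # path is illustrative
	minikube -p functional-694790 cache add minikube-local-cache-test:functional-694790
	minikube -p functional-694790 cache delete minikube-local-cache-test:functional-694790
	docker rmi minikube-local-cache-test:functional-694790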

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.34364ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 cache reload: (1.403782938s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
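
The cache_reload test demonstrates that `minikube cache reload` re-pushes every cached image into the node after one has been removed there. The same round trip by hand against this profile:

	minikube -p functional-694790 ssh -- sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-694790 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	minikube -p functional-694790 cache reload
	minikube -p functional-694790 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again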

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 kubectl -- --context functional-694790 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-694790 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-694790 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.005227307s)
functional_test.go:757: restart took 31.005355684s for "functional-694790" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.01s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-694790 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 logs: (1.306256137s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 logs --file /tmp/TestFunctionalserialLogsFileCmd151410509/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 logs --file /tmp/TestFunctionalserialLogsFileCmd151410509/001/logs.txt: (1.362484601s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-694790 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-694790
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-694790: exit status 115 (292.624292ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.165:30183 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-694790 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 config get cpus: exit status 14 (66.377637ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 config get cpus: exit status 14 (48.97051ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
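
The ConfigCmd block documents the per-profile config behaviour: `config get` on an unset key exits with status 14 and "specified key could not be found in config", while set/unset round-trip the value. Condensed:

	minikube -p functional-694790 config set cpus 2
	minikube -p functional-694790 config get cpus     # prints the stored value (2)
	minikube -p functional-694790 config unset cpus
	minikube -p functional-694790 config get cpus     # exit status 14, key not found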

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-694790 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-694790 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 621975: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.47s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.120054ms)

                                                
                                                
-- stdout --
	* [functional-694790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:27:02.091382  621868 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:27:02.091482  621868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:02.091488  621868 out.go:304] Setting ErrFile to fd 2...
	I0520 13:27:02.091493  621868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:02.091679  621868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:27:02.092249  621868 out.go:298] Setting JSON to false
	I0520 13:27:02.093217  621868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11362,"bootTime":1716200260,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:27:02.093316  621868 start.go:139] virtualization: kvm guest
	I0520 13:27:02.096357  621868 out.go:177] * [functional-694790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:27:02.098601  621868 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:27:02.098646  621868 notify.go:220] Checking for updates...
	I0520 13:27:02.100912  621868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:27:02.103125  621868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:27:02.105485  621868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:02.107629  621868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:27:02.109813  621868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:27:02.112301  621868 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:27:02.112949  621868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:02.113026  621868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:02.129289  621868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0520 13:27:02.129731  621868 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:02.130368  621868 main.go:141] libmachine: Using API Version  1
	I0520 13:27:02.130390  621868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:02.130765  621868 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:02.130980  621868 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:27:02.131306  621868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:27:02.131743  621868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:02.131801  621868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:02.147203  621868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
	I0520 13:27:02.147814  621868 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:02.148526  621868 main.go:141] libmachine: Using API Version  1
	I0520 13:27:02.148554  621868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:02.148925  621868 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:02.149157  621868 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:27:02.185346  621868 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:27:02.187697  621868 start.go:297] selected driver: kvm2
	I0520 13:27:02.187727  621868 start.go:901] validating driver "kvm2" against &{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:27:02.187938  621868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:27:02.191267  621868 out.go:177] 
	W0520 13:27:02.193357  621868 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 13:27:02.195367  621868 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
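The dry-run failure above is the intended outcome: --memory 250MB is below minikube's usable minimum of 1800MB, so start exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before the existing profile is touched, while the follow-up dry run without a memory override validates cleanly. A minimal sketch of reproducing the check by hand, assuming the binary and profile names from this run:

  # Under-request memory; --dry-run keeps the validation from modifying the KVM profile.
  out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio
  echo "exit status: $?"   # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY)

  # Without the memory override, the same dry run should validate the existing profile.
  out/minikube-linux-amd64 start -p functional-694790 --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio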

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.441094ms)

                                                
                                                
-- stdout --
	* [functional-694790] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:27:01.928225  621841 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:27:01.928362  621841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:01.928373  621841 out.go:304] Setting ErrFile to fd 2...
	I0520 13:27:01.928381  621841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:27:01.928681  621841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:27:01.929279  621841 out.go:298] Setting JSON to false
	I0520 13:27:01.930246  621841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11362,"bootTime":1716200260,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:27:01.930317  621841 start.go:139] virtualization: kvm guest
	I0520 13:27:01.933195  621841 out.go:177] * [functional-694790] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0520 13:27:01.935788  621841 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 13:27:01.935815  621841 notify.go:220] Checking for updates...
	I0520 13:27:01.937971  621841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:27:01.940175  621841 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	I0520 13:27:01.942709  621841 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	I0520 13:27:01.945267  621841 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:27:01.947354  621841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:27:01.950085  621841 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:27:01.950756  621841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:01.950859  621841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:01.967419  621841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43835
	I0520 13:27:01.967939  621841 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:01.968490  621841 main.go:141] libmachine: Using API Version  1
	I0520 13:27:01.968523  621841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:01.968880  621841 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:01.969169  621841 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:27:01.969499  621841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:27:01.969864  621841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:27:01.969907  621841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:27:01.985716  621841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I0520 13:27:01.986284  621841 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:27:01.986973  621841 main.go:141] libmachine: Using API Version  1
	I0520 13:27:01.987015  621841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:27:01.987568  621841 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:27:01.987904  621841 main.go:141] libmachine: (functional-694790) Calling .DriverName
	I0520 13:27:02.025893  621841 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0520 13:27:02.027816  621841 start.go:297] selected driver: kvm2
	I0520 13:27:02.027864  621841 start.go:901] validating driver "kvm2" against &{Name:functional-694790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-694790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:27:02.028021  621841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:27:02.031472  621841 out.go:177] 
	W0520 13:27:02.033682  621841 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 13:27:02.035748  621841 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
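This is the same insufficient-memory dry run as above, but with minikube's output localized: the French messages ("Utilisation du pilote kvm2 basé sur le profil existant" — using the kvm2 driver based on the existing profile; "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY..." — exiting because the requested 250 MiB is below the usable minimum of 1800 MB) confirm the translation path is exercised. A hedged sketch, assuming the locale is selected through the standard LC_ALL/LANG environment variables (the exact mechanism the test harness uses is not visible in this log):

  # Force a French locale for one invocation; the memory check should still exit 23,
  # but the messages should come out translated.
  LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 \
    out/minikube-linux-amd64 start -p functional-694790 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio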

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
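The second invocation exercises the Go-template form of minikube status (the "kublet" key above is just literal text in the caller's template, not a minikube field name). A short sketch of the three output styles the test walks through, assuming the profile name from this run:

  out/minikube-linux-amd64 -p functional-694790 status            # human-readable summary
  out/minikube-linux-amd64 -p functional-694790 status -o json    # machine-readable
  out/minikube-linux-amd64 -p functional-694790 status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'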

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-694790 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-694790 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-4p25w" [0bd66862-320f-45ea-9eb6-0e93315b5797] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-4p25w" [0bd66862-320f-45ea-9eb6-0e93315b5797] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004632636s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.165:32255
functional_test.go:1671: http://192.168.39.165:32255: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-4p25w

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.165:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.165:32255
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.57s)
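The connectivity test above is the standard NodePort round trip: create a deployment, expose it as a NodePort service, let minikube resolve the node URL, then hit the endpoint from the host (the echoserver body above reflects the request back). A condensed sketch using the image and names from this run:

  kubectl --context functional-694790 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-694790 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  kubectl --context functional-694790 wait --for=condition=Ready pod \
    -l app=hello-node-connect --timeout=120s
  URL=$(out/minikube-linux-amd64 -p functional-694790 service hello-node-connect --url)
  curl -s "$URL"   # echoserver prints the hostname, client address and request headers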

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (38.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [08e996c5-cd15-4e7c-8ece-c092f7ce1431] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004680009s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-694790 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-694790 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-694790 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6873aeac-a725-4af7-a074-16140abd0e9f] Pending
helpers_test.go:344: "sp-pod" [6873aeac-a725-4af7-a074-16140abd0e9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6873aeac-a725-4af7-a074-16140abd0e9f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004763179s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-694790 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-694790 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ce83a25f-c3df-415c-8ceb-603225976ffa] Pending
helpers_test.go:344: "sp-pod" [ce83a25f-c3df-415c-8ceb-603225976ffa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ce83a25f-c3df-415c-8ceb-603225976ffa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004983242s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-694790 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.26s)
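The PVC test checks that data written through the claim outlives the pod: the first sp-pod touches /tmp/mount/foo, the pod is deleted and recreated against the same claim, and the second pod can still list the file. A sketch of the same persistence check; it reuses the test's manifests under testdata/storage-provisioner, whose contents are not shown in this log:

  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-694790 wait --for=condition=Ready pod sp-pod --timeout=180s
  kubectl --context functional-694790 exec sp-pod -- touch /tmp/mount/foo   # write via the PVC

  kubectl --context functional-694790 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-694790 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-694790 wait --for=condition=Ready pod sp-pod --timeout=180s
  kubectl --context functional-694790 exec sp-pod -- ls /tmp/mount          # foo should persist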

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh -n functional-694790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cp functional-694790:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd381300342/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh -n functional-694790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh -n functional-694790 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-694790 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-kwgt7" [421055c7-0212-4e55-9551-985e96f65dc6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-kwgt7" [421055c7-0212-4e55-9551-985e96f65dc6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.006875722s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;": exit status 1 (331.282908ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;": exit status 1 (138.787544ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;": exit status 1 (217.193817ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-694790 exec mysql-64454c8b5c-kwgt7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.64s)
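The three failed exec attempts above are just the MySQL container finishing its initialization (access denied while the bootstrap instance is up, then the socket briefly unavailable); the test keeps retrying until "show databases;" succeeds. A sketch of the same wait-until-ready loop, using the pod name from this run:

  POD=mysql-64454c8b5c-kwgt7
  until kubectl --context functional-694790 exec "$POD" -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    echo "mysqld not ready yet, retrying..."; sleep 5
  done
  kubectl --context functional-694790 exec "$POD" -- mysql -ppassword -e "show databases;"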

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/609867/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/test/nested/copy/609867/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/609867.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/609867.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/609867.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /usr/share/ca-certificates/609867.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6098672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/6098672.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6098672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /usr/share/ca-certificates/6098672.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)
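CertSync verifies that a certificate supplied on the host is synced into the guest both under its original name (/etc/ssl/certs/609867.pem, /usr/share/ca-certificates/609867.pem) and under a hashed name (/etc/ssl/certs/51391683.0). A sketch of spot-checking one cert by hand; treating the hashed filename as the OpenSSL subject hash is an assumption, not something this log states:

  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/609867.pem" > /tmp/609867.pem
  openssl x509 -in /tmp/609867.pem -noout -subject_hash        # assumed to print 51391683
  out/minikube-linux-amd64 -p functional-694790 ssh "sudo cat /etc/ssl/certs/51391683.0" \
    | diff - /tmp/609867.pem && echo "hashed copy matches"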

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-694790 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "sudo systemctl is-active docker": exit status 1 (245.258612ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "sudo systemctl is-active containerd": exit status 1 (240.976527ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
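Both probes above fail on purpose: with cri-o as the active runtime, docker and containerd must be inactive, and systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, which minikube ssh surfaces as a non-zero exit. A quick sketch of the same check across all three runtimes (crio being active is the expected, though not logged, counterpart):

  for unit in docker containerd crio; do
    printf '%s: ' "$unit"
    out/minikube-linux-amd64 -p functional-694790 ssh "sudo systemctl is-active $unit" || true
  done
  # expected: docker and containerd -> inactive, crio -> active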

                                                
                                    
x
+
TestFunctional/parallel/License (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdany-port1361147987/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716211619372292660" to /tmp/TestFunctionalparallelMountCmdany-port1361147987/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716211619372292660" to /tmp/TestFunctionalparallelMountCmdany-port1361147987/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716211619372292660" to /tmp/TestFunctionalparallelMountCmdany-port1361147987/001/test-1716211619372292660
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.576287ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 13:26 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 13:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 13:26 test-1716211619372292660
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh cat /mount-9p/test-1716211619372292660
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-694790 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1e852e2b-77fb-4f4c-bfe9-279208884dc2] Pending
helpers_test.go:344: "busybox-mount" [1e852e2b-77fb-4f4c-bfe9-279208884dc2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1e852e2b-77fb-4f4c-bfe9-279208884dc2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1e852e2b-77fb-4f4c-bfe9-279208884dc2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003668576s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-694790 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdany-port1361147987/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.75s)
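The any-port flow above: minikube mount runs as a background daemon exporting the host directory over 9p, the guest sees it at /mount-9p (the first findmnt probe simply races the mount coming up, hence the single retry), and the busybox pod reads a host-created file and writes created-by-pod back. A sketch of driving the mount by hand, with a scratch directory standing in for the test's temp dir:

  SRC=$(mktemp -d)
  echo "hello from host" > "$SRC/created-by-test"

  # Start the 9p mount in the background, then inspect it from inside the guest.
  out/minikube-linux-amd64 mount -p functional-694790 "$SRC:/mount-9p" --alsologtostderr -v=1 &
  MOUNT_PID=$!
  sleep 2
  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p && ls -la /mount-9p"

  # Tear down: unmount in the guest, then stop the background mount process.
  out/minikube-linux-amd64 -p functional-694790 ssh "sudo umount -f /mount-9p" || true
  kill "$MOUNT_PID"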

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-694790 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-694790 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-mfzk8" [c82d9234-9b83-4123-b75d-7a9e721fb74d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-mfzk8" [c82d9234-9b83-4123-b75d-7a9e721fb74d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005075541s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "272.507488ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "47.478394ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "250.980964ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.837869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdspecific-port1047098385/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (195.360183ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdspecific-port1047098385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "sudo umount -f /mount-9p": exit status 1 (199.971399ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-694790 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdspecific-port1047098385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)
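The specific-port variant only adds --port 46464 so the 9p server listens on a fixed host port; the final forced umount failing with "not mounted" (umount's exit code 32) is expected, since the mount daemon had already been stopped and cleaned up. A one-line sketch, assuming the same scratch directory as in the any-port example above:

  out/minikube-linux-amd64 mount -p functional-694790 "$SRC:/mount-9p" --port 46464 --alsologtostderr -v=1 &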

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service list -o json
functional_test.go:1490: Took "843.212822ms" to run "out/minikube-linux-amd64 -p functional-694790 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T" /mount1: exit status 1 (229.669522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-694790 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-694790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946102078/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.165:31344
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.165:31344
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-694790 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-694790
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-694790
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-694790 image ls --format short --alsologtostderr:
I0520 13:27:41.420481  623812 out.go:291] Setting OutFile to fd 1 ...
I0520 13:27:41.420661  623812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.420675  623812 out.go:304] Setting ErrFile to fd 2...
I0520 13:27:41.420680  623812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.420975  623812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
I0520 13:27:41.421795  623812 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.421945  623812 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.422500  623812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.422585  623812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.437923  623812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
I0520 13:27:41.438475  623812 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.439072  623812 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.439086  623812 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.439428  623812 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.439600  623812 main.go:141] libmachine: (functional-694790) Calling .GetState
I0520 13:27:41.441737  623812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.441805  623812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.457150  623812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
I0520 13:27:41.457734  623812 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.458255  623812 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.458278  623812 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.458635  623812 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.458846  623812 main.go:141] libmachine: (functional-694790) Calling .DriverName
I0520 13:27:41.459041  623812 ssh_runner.go:195] Run: systemctl --version
I0520 13:27:41.459065  623812 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
I0520 13:27:41.462310  623812 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.462880  623812 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
I0520 13:27:41.462912  623812 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.463104  623812 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
I0520 13:27:41.463332  623812 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
I0520 13:27:41.463486  623812 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
I0520 13:27:41.463664  623812 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
I0520 13:27:41.548123  623812 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 13:27:41.597258  623812 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.597296  623812 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.597560  623812 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.597574  623812 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:41.597589  623812 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.597597  623812 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.597858  623812 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.597875  623812 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
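The stderr above shows that image ls is backed by sudo crictl images --output json on the node. A rough, hedged way to reproduce the same listing outside the test harness, assuming the functional-694790 profile is still running (the ssh form below is an illustration of what the command does internally, not part of the test):
# Sketch only: the command exercised by this test.
out/minikube-linux-amd64 -p functional-694790 image ls --format short
# Sketch only: the same data taken directly from the node, as in the stderr above.
out/minikube-linux-amd64 -p functional-694790 ssh "sudo crictl images --output json"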

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-694790 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | e784f4560448b | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-694790  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-694790  | 3bc2985b5a404 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-694790 image ls --format table --alsologtostderr:
I0520 13:27:41.876484  623923 out.go:291] Setting OutFile to fd 1 ...
I0520 13:27:41.876749  623923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.876762  623923 out.go:304] Setting ErrFile to fd 2...
I0520 13:27:41.876768  623923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.876968  623923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
I0520 13:27:41.877554  623923 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.877656  623923 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.877994  623923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.878057  623923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.892999  623923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
I0520 13:27:41.893444  623923 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.894051  623923 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.894080  623923 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.894550  623923 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.894751  623923 main.go:141] libmachine: (functional-694790) Calling .GetState
I0520 13:27:41.897036  623923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.897083  623923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.912082  623923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
I0520 13:27:41.912469  623923 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.912951  623923 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.912967  623923 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.913320  623923 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.913530  623923 main.go:141] libmachine: (functional-694790) Calling .DriverName
I0520 13:27:41.913741  623923 ssh_runner.go:195] Run: systemctl --version
I0520 13:27:41.913768  623923 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
I0520 13:27:41.916538  623923 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.916931  623923 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
I0520 13:27:41.916954  623923 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.917109  623923 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
I0520 13:27:41.917288  623923 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
I0520 13:27:41.917437  623923 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
I0520 13:27:41.917595  623923 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
I0520 13:27:41.995416  623923 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 13:27:42.037892  623923 main.go:141] libmachine: Making call to close driver server
I0520 13:27:42.037915  623923 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:42.038180  623923 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:42.038194  623923 main.go:141] libmachine: (functional-694790) DBG | Closing plugin on server side
I0520 13:27:42.038201  623923 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:42.038215  623923 main.go:141] libmachine: Making call to close driver server
I0520 13:27:42.038228  623923 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:42.038488  623923 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:42.038502  623923 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:42.038533  623923 main.go:141] libmachine: (functional-694790) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
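The table above maps each tag to a truncated image ID and a size. A quick, hedged way to pull out a single row on the host (plain grep, not something the test does), assuming the same profile:
# Sketch only: show just the kube-apiserver row of the table output.
out/minikube-linux-amd64 -p functional-694790 image ls --format table | grep kube-apiserver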

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-694790 image ls --format json --alsologtostderr:
[{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"re
poTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d4
8ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":["docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c","docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"191805953"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-694790"],"size":"34114467"},{"id":"3bc2985b5a40474be4dfa1a64adddadd3e00af4794c15b44f31
143bbea37987a","repoDigests":["localhost/minikube-local-cache-test@sha256:d630ab5c0c2b1543bf0418093cc83b2d843b1a3f496fb496aa6fe0ed33d2a3fb"],"repoTags":["localhost/minikube-local-cache-test:functional-694790"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9
ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/c
oredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registr
y.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-694790 image ls --format json --alsologtostderr:
I0520 13:27:41.650243  623859 out.go:291] Setting OutFile to fd 1 ...
I0520 13:27:41.650534  623859 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.650547  623859 out.go:304] Setting ErrFile to fd 2...
I0520 13:27:41.650551  623859 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.650737  623859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
I0520 13:27:41.651334  623859 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.651434  623859 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.651772  623859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.651841  623859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.666601  623859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
I0520 13:27:41.667239  623859 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.667963  623859 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.668127  623859 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.668525  623859 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.668745  623859 main.go:141] libmachine: (functional-694790) Calling .GetState
I0520 13:27:41.670978  623859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.671036  623859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.685631  623859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
I0520 13:27:41.686111  623859 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.686606  623859 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.686633  623859 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.687086  623859 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.687271  623859 main.go:141] libmachine: (functional-694790) Calling .DriverName
I0520 13:27:41.687444  623859 ssh_runner.go:195] Run: systemctl --version
I0520 13:27:41.687470  623859 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
I0520 13:27:41.690254  623859 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.690646  623859 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
I0520 13:27:41.690681  623859 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.690826  623859 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
I0520 13:27:41.691083  623859 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
I0520 13:27:41.691242  623859 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
I0520 13:27:41.691366  623859 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
I0520 13:27:41.772107  623859 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 13:27:41.823118  623859 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.823148  623859 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.823431  623859 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.823451  623859 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:41.823459  623859 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.823470  623859 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.823732  623859 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.823748  623859 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
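Each entry in the JSON output carries id, repoDigests, repoTags and size, so it is easy to post-process. A hedged sketch using jq (jq is an assumption here; the test itself never invokes it), assuming the same profile is still running:
# Sketch only: print every tag known to the CRI-O runtime, one per line.
out/minikube-linux-amd64 -p functional-694790 image ls --format json | jq -r '.[].repoTags[]?'
# Sketch only: add up the reported image sizes (size is a string in the output, hence tonumber).
out/minikube-linux-amd64 -p functional-694790 image ls --format json | jq '[.[].size | tonumber] | add'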

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-694790 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-694790
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3bc2985b5a40474be4dfa1a64adddadd3e00af4794c15b44f31143bbea37987a
repoDigests:
- localhost/minikube-local-cache-test@sha256:d630ab5c0c2b1543bf0418093cc83b2d843b1a3f496fb496aa6fe0ed33d2a3fb
repoTags:
- localhost/minikube-local-cache-test:functional-694790
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests:
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
- docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b
repoTags:
- docker.io/library/nginx:latest
size: "191805953"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-694790 image ls --format yaml --alsologtostderr:
I0520 13:27:41.421831  623813 out.go:291] Setting OutFile to fd 1 ...
I0520 13:27:41.421936  623813 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.421946  623813 out.go:304] Setting ErrFile to fd 2...
I0520 13:27:41.421952  623813 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.422261  623813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
I0520 13:27:41.422962  623813 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.423069  623813 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.423433  623813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.423484  623813 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.438114  623813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
I0520 13:27:41.438658  623813 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.439242  623813 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.439266  623813 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.439723  623813 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.439905  623813 main.go:141] libmachine: (functional-694790) Calling .GetState
I0520 13:27:41.442231  623813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.442279  623813 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.457144  623813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
I0520 13:27:41.457569  623813 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.458109  623813 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.458129  623813 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.458551  623813 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.458776  623813 main.go:141] libmachine: (functional-694790) Calling .DriverName
I0520 13:27:41.459011  623813 ssh_runner.go:195] Run: systemctl --version
I0520 13:27:41.459052  623813 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
I0520 13:27:41.462411  623813 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.462880  623813 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
I0520 13:27:41.462909  623813 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.463228  623813 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
I0520 13:27:41.463437  623813 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
I0520 13:27:41.463570  623813 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
I0520 13:27:41.463723  623813 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
I0520 13:27:41.548433  623813 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 13:27:41.599373  623813 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.599390  623813 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.599701  623813 main.go:141] libmachine: (functional-694790) DBG | Closing plugin on server side
I0520 13:27:41.599720  623813 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.599736  623813 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:41.599750  623813 main.go:141] libmachine: Making call to close driver server
I0520 13:27:41.599771  623813 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:41.600151  623813 main.go:141] libmachine: (functional-694790) DBG | Closing plugin on server side
I0520 13:27:41.600185  623813 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:41.600199  623813 main.go:141] libmachine: Making call to close connection to plugin binary
E0520 13:27:41.601048  623813 logFile.go:53] failed to close the audit log: invalid argument
W0520 13:27:41.601064  623813 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"bd6d74e7-7e46-407f-8267-d8606c664e46\",\"source\":\"https://minikube.sigs.k8s.io/": unexpected end of JSON input
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-694790 ssh pgrep buildkitd: exit status 1 (200.973323ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image build -t localhost/my-image:functional-694790 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image build -t localhost/my-image:functional-694790 testdata/build --alsologtostderr: (2.927931775s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-694790 image build -t localhost/my-image:functional-694790 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9fe5d5226fc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-694790
--> d1fe80cc968
Successfully tagged localhost/my-image:functional-694790
d1fe80cc968e6fac59d9a47fd7f3f9cda33fa88694d8db424e255996ab39077c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-694790 image build -t localhost/my-image:functional-694790 testdata/build --alsologtostderr:
I0520 13:27:41.854204  623912 out.go:291] Setting OutFile to fd 1 ...
I0520 13:27:41.854629  623912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.854644  623912 out.go:304] Setting ErrFile to fd 2...
I0520 13:27:41.854650  623912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 13:27:41.854926  623912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
I0520 13:27:41.855682  623912 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.856362  623912 config.go:182] Loaded profile config "functional-694790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 13:27:41.856740  623912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.856820  623912 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.872709  623912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
I0520 13:27:41.873207  623912 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.873830  623912 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.873854  623912 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.874206  623912 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.874447  623912 main.go:141] libmachine: (functional-694790) Calling .GetState
I0520 13:27:41.876349  623912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 13:27:41.876403  623912 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 13:27:41.891213  623912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
I0520 13:27:41.891690  623912 main.go:141] libmachine: () Calling .GetVersion
I0520 13:27:41.892193  623912 main.go:141] libmachine: Using API Version  1
I0520 13:27:41.892219  623912 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 13:27:41.892633  623912 main.go:141] libmachine: () Calling .GetMachineName
I0520 13:27:41.892837  623912 main.go:141] libmachine: (functional-694790) Calling .DriverName
I0520 13:27:41.893076  623912 ssh_runner.go:195] Run: systemctl --version
I0520 13:27:41.893121  623912 main.go:141] libmachine: (functional-694790) Calling .GetSSHHostname
I0520 13:27:41.896411  623912 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.896942  623912 main.go:141] libmachine: (functional-694790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:d8:e3", ip: ""} in network mk-functional-694790: {Iface:virbr1 ExpiryTime:2024-05-20 14:07:34 +0000 UTC Type:0 Mac:52:54:00:0a:d8:e3 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-694790 Clientid:01:52:54:00:0a:d8:e3}
I0520 13:27:41.896978  623912 main.go:141] libmachine: (functional-694790) DBG | domain functional-694790 has defined IP address 192.168.39.165 and MAC address 52:54:00:0a:d8:e3 in network mk-functional-694790
I0520 13:27:41.897121  623912 main.go:141] libmachine: (functional-694790) Calling .GetSSHPort
I0520 13:27:41.897310  623912 main.go:141] libmachine: (functional-694790) Calling .GetSSHKeyPath
I0520 13:27:41.897488  623912 main.go:141] libmachine: (functional-694790) Calling .GetSSHUsername
I0520 13:27:41.897644  623912 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/functional-694790/id_rsa Username:docker}
I0520 13:27:41.975213  623912 build_images.go:161] Building image from path: /tmp/build.3338204812.tar
I0520 13:27:41.975321  623912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 13:27:41.985262  623912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3338204812.tar
I0520 13:27:41.990285  623912 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3338204812.tar: stat -c "%s %y" /var/lib/minikube/build/build.3338204812.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3338204812.tar': No such file or directory
I0520 13:27:41.990313  623912 ssh_runner.go:362] scp /tmp/build.3338204812.tar --> /var/lib/minikube/build/build.3338204812.tar (3072 bytes)
I0520 13:27:42.015465  623912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3338204812
I0520 13:27:42.037379  623912 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3338204812 -xf /var/lib/minikube/build/build.3338204812.tar
I0520 13:27:42.049097  623912 crio.go:315] Building image: /var/lib/minikube/build/build.3338204812
I0520 13:27:42.049163  623912 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-694790 /var/lib/minikube/build/build.3338204812 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0520 13:27:44.706905  623912 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-694790 /var/lib/minikube/build/build.3338204812 --cgroup-manager=cgroupfs: (2.657698959s)
I0520 13:27:44.706984  623912 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3338204812
I0520 13:27:44.717789  623912 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3338204812.tar
I0520 13:27:44.727447  623912 build_images.go:217] Built localhost/my-image:functional-694790 from /tmp/build.3338204812.tar
I0520 13:27:44.727482  623912 build_images.go:133] succeeded building to: functional-694790
I0520 13:27:44.727486  623912 build_images.go:134] failed building to: 
I0520 13:27:44.727511  623912 main.go:141] libmachine: Making call to close driver server
I0520 13:27:44.727523  623912 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:44.727827  623912 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:44.727867  623912 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 13:27:44.727869  623912 main.go:141] libmachine: (functional-694790) DBG | Closing plugin on server side
I0520 13:27:44.727878  623912 main.go:141] libmachine: Making call to close driver server
I0520 13:27:44.727886  623912 main.go:141] libmachine: (functional-694790) Calling .Close
I0520 13:27:44.728134  623912 main.go:141] libmachine: Successfully made call to close driver server
I0520 13:27:44.728148  623912 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)
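The build log shows a three-step build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) executed with podman on the node. A hedged sketch for double-checking the result afterwards, assuming the profile is still up; the podman form mirrors what the build uses internally and is not part of the test:
# Sketch only: confirm the freshly built image is visible through minikube.
out/minikube-linux-amd64 -p functional-694790 image ls | grep localhost/my-image
# Sketch only: inspect it on the node with the same tool the build shelled out to.
out/minikube-linux-amd64 -p functional-694790 ssh "sudo podman images localhost/my-image:functional-694790"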

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.965227363s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-694790
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr
2024/05/20 13:27:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr: (5.139352233s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr: (4.145659066s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.874930365s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-694790
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image load --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr: (4.787100777s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image save gcr.io/google-containers/addon-resizer:functional-694790 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image save gcr.io/google-containers/addon-resizer:functional-694790 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.703123304s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.70s)
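image save writes a tar archive to the given host path. A hedged sanity check of the artifact using plain tar on the host (not something the test performs):
# Sketch only: list the first entries of the saved archive and its size without loading it anywhere.
tar -tf /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar | head
ls -lh /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar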

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image rm gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image rm gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr: (1.180091218s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.539343162s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-694790
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-694790 image save --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-694790 image save --daemon gcr.io/google-containers/addon-resizer:functional-694790 --alsologtostderr: (1.001716815s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-694790
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
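The test removes the tag from the local docker daemon, restores it from the cluster with image save --daemon, then runs a bare docker image inspect. A slightly more targeted check is sketched below; the --format template is an assumption, not what the test runs:
# Sketch only: confirm the image is back in the local daemon and print its ID and size.
docker image inspect gcr.io/google-containers/addon-resizer:functional-694790 --format '{{.Id}} {{.Size}}'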

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-694790
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-694790
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-694790
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-170194 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 13:28:01.807867  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-170194 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.548062167s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (208.23s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-170194 -- rollout status deployment/busybox: (4.214502914s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-kn5pb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-tmq2s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-vr9tf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-kn5pb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-tmq2s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-vr9tf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-kn5pb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-tmq2s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-vr9tf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.40s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-kn5pb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-kn5pb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-tmq2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-tmq2s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-vr9tf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-170194 -- exec busybox-fc5497c4f-vr9tf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (44.44s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-170194 -v=7 --alsologtostderr
E0520 13:31:59.760751  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:31:59.766092  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:31:59.776390  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:31:59.796724  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:31:59.837114  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:31:59.917473  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:00.077722  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:00.398422  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:01.038981  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:02.319484  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:04.880015  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:32:10.000616  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-170194 -v=7 --alsologtostderr: (43.606705407s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.44s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-170194 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (12.78s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp testdata/cp-test.txt ha-170194:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194:/home/docker/cp-test.txt ha-170194-m02:/home/docker/cp-test_ha-170194_ha-170194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test_ha-170194_ha-170194-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194:/home/docker/cp-test.txt ha-170194-m03:/home/docker/cp-test_ha-170194_ha-170194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test_ha-170194_ha-170194-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194:/home/docker/cp-test.txt ha-170194-m04:/home/docker/cp-test_ha-170194_ha-170194-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test_ha-170194_ha-170194-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp testdata/cp-test.txt ha-170194-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test.txt"
E0520 13:32:20.241812  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m02:/home/docker/cp-test.txt ha-170194:/home/docker/cp-test_ha-170194-m02_ha-170194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test_ha-170194-m02_ha-170194.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m02:/home/docker/cp-test.txt ha-170194-m03:/home/docker/cp-test_ha-170194-m02_ha-170194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test_ha-170194-m02_ha-170194-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m02:/home/docker/cp-test.txt ha-170194-m04:/home/docker/cp-test_ha-170194-m02_ha-170194-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test_ha-170194-m02_ha-170194-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp testdata/cp-test.txt ha-170194-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt ha-170194:/home/docker/cp-test_ha-170194-m03_ha-170194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test_ha-170194-m03_ha-170194.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt ha-170194-m02:/home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test_ha-170194-m03_ha-170194-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m03:/home/docker/cp-test.txt ha-170194-m04:/home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test_ha-170194-m03_ha-170194-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp testdata/cp-test.txt ha-170194-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1524472985/001/cp-test_ha-170194-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt ha-170194:/home/docker/cp-test_ha-170194-m04_ha-170194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194 "sudo cat /home/docker/cp-test_ha-170194-m04_ha-170194.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt ha-170194-m02:/home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m02 "sudo cat /home/docker/cp-test_ha-170194-m04_ha-170194-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 cp ha-170194-m04:/home/docker/cp-test.txt ha-170194-m03:/home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 ssh -n ha-170194-m03 "sudo cat /home/docker/cp-test_ha-170194-m04_ha-170194-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.78s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.468344356s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.94s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-170194 node delete m03 -v=7 --alsologtostderr: (16.187872084s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.94s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/RestartCluster (348.81s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-170194 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 13:46:59.760533  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 13:48:01.807468  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
E0520 13:48:22.805752  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-170194 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.051953401s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (348.81s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

TestMultiControlPlane/serial/AddSecondaryNode (70.37s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-170194 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-170194 --control-plane -v=7 --alsologtostderr: (1m9.540495083s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-170194 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

TestJSONOutput/start/Command (62.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-794869 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0520 13:53:01.807682  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-794869 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.731100559s)
--- PASS: TestJSONOutput/start/Command (62.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-794869 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-794869 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-794869 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-794869 --output=json --user=testUser: (7.349286284s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-349876 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-349876 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.883492ms)

-- stdout --
	{"specversion":"1.0","id":"39a39bc9-6426-49a7-a679-5d9dd1b9aeb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-349876] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed5ccaa3-d64d-47ce-9f60-00ba177507a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18929"}}
	{"specversion":"1.0","id":"5ddc754e-5e38-4f45-989b-86a265968597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9bf2df73-c76e-40ba-b8f1-5cb5c3b2ce6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig"}}
	{"specversion":"1.0","id":"04fd2fef-af38-4176-9fcc-e2e397229b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube"}}
	{"specversion":"1.0","id":"d6665b2a-b755-448f-bd44-a4b9fa1cd650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ea119dca-483a-4520-814d-c3aa8c64d9de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c106abb-04c3-4680-b629-1dc8bccb5500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-349876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-349876
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (87.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-394066 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-394066 --driver=kvm2  --container-runtime=crio: (41.85110706s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-397340 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-397340 --driver=kvm2  --container-runtime=crio: (42.849319325s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-394066
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-397340
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-397340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-397340
helpers_test.go:175: Cleaning up "first-394066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-394066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-394066: (1.011047333s)
--- PASS: TestMinikubeProfile (87.33s)

TestMountStart/serial/StartWithMountFirst (29.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-023931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-023931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.051904963s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.05s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-023931 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-023931 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (25.09s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-040082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-040082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.092841573s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.09s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-023931 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-040082
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-040082: (1.275639542s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (20.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-040082
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-040082: (19.370041378s)
--- PASS: TestMountStart/serial/RestartStopped (20.37s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-040082 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (96.43s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114485 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 13:56:59.760584  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114485 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.995279971s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.43s)

TestMultiNode/serial/DeployApp2Nodes (5.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-114485 -- rollout status deployment/busybox: (3.897024163s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-p56pn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-w8gjh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-p56pn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-w8gjh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-p56pn -- nslookup kubernetes.default.svc.cluster.local
E0520 13:57:44.855407  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-w8gjh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.35s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-p56pn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-p56pn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-w8gjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114485 -- exec busybox-fc5497c4f-w8gjh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (38.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-114485 -v 3 --alsologtostderr
E0520 13:58:01.807914  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-114485 -v 3 --alsologtostderr: (37.86884122s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (38.44s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-114485 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp testdata/cp-test.txt multinode-114485:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485:/home/docker/cp-test.txt multinode-114485-m02:/home/docker/cp-test_multinode-114485_multinode-114485-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test_multinode-114485_multinode-114485-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485:/home/docker/cp-test.txt multinode-114485-m03:/home/docker/cp-test_multinode-114485_multinode-114485-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test_multinode-114485_multinode-114485-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp testdata/cp-test.txt multinode-114485-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt multinode-114485:/home/docker/cp-test_multinode-114485-m02_multinode-114485.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test_multinode-114485-m02_multinode-114485.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m02:/home/docker/cp-test.txt multinode-114485-m03:/home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test_multinode-114485-m02_multinode-114485-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp testdata/cp-test.txt multinode-114485-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile345453774/001/cp-test_multinode-114485-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt multinode-114485:/home/docker/cp-test_multinode-114485-m03_multinode-114485.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485 "sudo cat /home/docker/cp-test_multinode-114485-m03_multinode-114485.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 cp multinode-114485-m03:/home/docker/cp-test.txt multinode-114485-m02:/home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 ssh -n multinode-114485-m02 "sudo cat /home/docker/cp-test_multinode-114485-m03_multinode-114485-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.22s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-114485 node stop m03: (1.399687132s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114485 status: exit status 7 (431.733721ms)

-- stdout --
	multinode-114485
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-114485-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-114485-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr: exit status 7 (434.530032ms)

-- stdout --
	multinode-114485
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-114485-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-114485-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0520 13:58:33.705817  641190 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:58:33.706083  641190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:58:33.706094  641190 out.go:304] Setting ErrFile to fd 2...
	I0520 13:58:33.706099  641190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:58:33.706327  641190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18929-602525/.minikube/bin
	I0520 13:58:33.706505  641190 out.go:298] Setting JSON to false
	I0520 13:58:33.706532  641190 mustload.go:65] Loading cluster: multinode-114485
	I0520 13:58:33.706577  641190 notify.go:220] Checking for updates...
	I0520 13:58:33.707048  641190 config.go:182] Loaded profile config "multinode-114485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:58:33.707081  641190 status.go:255] checking status of multinode-114485 ...
	I0520 13:58:33.707563  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.707627  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.726324  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0520 13:58:33.726778  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.727450  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.727479  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.727918  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.728173  641190 main.go:141] libmachine: (multinode-114485) Calling .GetState
	I0520 13:58:33.729865  641190 status.go:330] multinode-114485 host status = "Running" (err=<nil>)
	I0520 13:58:33.729880  641190 host.go:66] Checking if "multinode-114485" exists ...
	I0520 13:58:33.730165  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.730199  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.745224  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0520 13:58:33.745707  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.746205  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.746229  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.746551  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.746764  641190 main.go:141] libmachine: (multinode-114485) Calling .GetIP
	I0520 13:58:33.749984  641190 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 13:58:33.750498  641190 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 13:58:33.750534  641190 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 13:58:33.750654  641190 host.go:66] Checking if "multinode-114485" exists ...
	I0520 13:58:33.750988  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.751033  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.775682  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0520 13:58:33.776643  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.777341  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.777364  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.777705  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.777933  641190 main.go:141] libmachine: (multinode-114485) Calling .DriverName
	I0520 13:58:33.778148  641190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:58:33.778184  641190 main.go:141] libmachine: (multinode-114485) Calling .GetSSHHostname
	I0520 13:58:33.781439  641190 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 13:58:33.781930  641190 main.go:141] libmachine: (multinode-114485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:a0:d9", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:56:17 +0000 UTC Type:0 Mac:52:54:00:78:a0:d9 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-114485 Clientid:01:52:54:00:78:a0:d9}
	I0520 13:58:33.781957  641190 main.go:141] libmachine: (multinode-114485) DBG | domain multinode-114485 has defined IP address 192.168.39.141 and MAC address 52:54:00:78:a0:d9 in network mk-multinode-114485
	I0520 13:58:33.782137  641190 main.go:141] libmachine: (multinode-114485) Calling .GetSSHPort
	I0520 13:58:33.782324  641190 main.go:141] libmachine: (multinode-114485) Calling .GetSSHKeyPath
	I0520 13:58:33.782496  641190 main.go:141] libmachine: (multinode-114485) Calling .GetSSHUsername
	I0520 13:58:33.782693  641190 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa Username:docker}
	I0520 13:58:33.868244  641190 ssh_runner.go:195] Run: systemctl --version
	I0520 13:58:33.874076  641190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:58:33.888861  641190 kubeconfig.go:125] found "multinode-114485" server: "https://192.168.39.141:8443"
	I0520 13:58:33.888894  641190 api_server.go:166] Checking apiserver status ...
	I0520 13:58:33.888930  641190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:58:33.902752  641190 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0520 13:58:33.911880  641190 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:58:33.911931  641190 ssh_runner.go:195] Run: ls
	I0520 13:58:33.915968  641190 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0520 13:58:33.920412  641190 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0520 13:58:33.920433  641190 status.go:422] multinode-114485 apiserver status = Running (err=<nil>)
	I0520 13:58:33.920443  641190 status.go:257] multinode-114485 status: &{Name:multinode-114485 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:58:33.920460  641190 status.go:255] checking status of multinode-114485-m02 ...
	I0520 13:58:33.920763  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.920801  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.937065  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0520 13:58:33.937564  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.938105  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.938133  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.938480  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.938686  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetState
	I0520 13:58:33.940302  641190 status.go:330] multinode-114485-m02 host status = "Running" (err=<nil>)
	I0520 13:58:33.940318  641190 host.go:66] Checking if "multinode-114485-m02" exists ...
	I0520 13:58:33.940591  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.940632  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.955987  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0520 13:58:33.956475  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.957025  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.957050  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.957411  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.957634  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetIP
	I0520 13:58:33.961176  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | domain multinode-114485-m02 has defined MAC address 52:54:00:aa:2c:61 in network mk-multinode-114485
	I0520 13:58:33.961636  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:2c:61", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:57:14 +0000 UTC Type:0 Mac:52:54:00:aa:2c:61 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-114485-m02 Clientid:01:52:54:00:aa:2c:61}
	I0520 13:58:33.961666  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | domain multinode-114485-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:aa:2c:61 in network mk-multinode-114485
	I0520 13:58:33.961819  641190 host.go:66] Checking if "multinode-114485-m02" exists ...
	I0520 13:58:33.962147  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:33.962188  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:33.977539  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I0520 13:58:33.978074  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:33.978497  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:33.978517  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:33.978825  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:33.979030  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .DriverName
	I0520 13:58:33.979200  641190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:58:33.979225  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetSSHHostname
	I0520 13:58:33.982562  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | domain multinode-114485-m02 has defined MAC address 52:54:00:aa:2c:61 in network mk-multinode-114485
	I0520 13:58:33.982833  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:2c:61", ip: ""} in network mk-multinode-114485: {Iface:virbr1 ExpiryTime:2024-05-20 14:57:14 +0000 UTC Type:0 Mac:52:54:00:aa:2c:61 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-114485-m02 Clientid:01:52:54:00:aa:2c:61}
	I0520 13:58:33.982854  641190 main.go:141] libmachine: (multinode-114485-m02) DBG | domain multinode-114485-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:aa:2c:61 in network mk-multinode-114485
	I0520 13:58:33.983035  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetSSHPort
	I0520 13:58:33.983247  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetSSHKeyPath
	I0520 13:58:33.983412  641190 main.go:141] libmachine: (multinode-114485-m02) Calling .GetSSHUsername
	I0520 13:58:33.983538  641190 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485-m02/id_rsa Username:docker}
	I0520 13:58:34.064141  641190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:58:34.078445  641190 status.go:257] multinode-114485-m02 status: &{Name:multinode-114485-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:58:34.078499  641190 status.go:255] checking status of multinode-114485-m03 ...
	I0520 13:58:34.078831  641190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:58:34.078886  641190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:58:34.094999  641190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0520 13:58:34.095387  641190 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:58:34.095794  641190 main.go:141] libmachine: Using API Version  1
	I0520 13:58:34.095816  641190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:58:34.096128  641190 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:58:34.096287  641190 main.go:141] libmachine: (multinode-114485-m03) Calling .GetState
	I0520 13:58:34.097806  641190 status.go:330] multinode-114485-m03 host status = "Stopped" (err=<nil>)
	I0520 13:58:34.097822  641190 status.go:343] host is not running, skipping remaining checks
	I0520 13:58:34.097828  641190 status.go:257] multinode-114485-m03 status: &{Name:multinode-114485-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
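
The stderr trace above is `minikube status` walking each node of multinode-114485: it dials the KVM driver plugin, opens an SSH session, checks /var usage, asks systemd whether the kubelet is active, and, for the control plane, probes the apiserver's /healthz endpoint. The sketch below reproduces the two health probes in plain Go; it is only an illustration of what the trace does (with the IP and key path taken from the log), not minikube's own code:

    // statusprobe.go: a rough, standalone rendition of the probes the StopNode trace performs.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
    )

    // kubeletActive mirrors the `sudo systemctl is-active --quiet service kubelet` check from the log.
    // is-active exits 0 for an active unit and non-zero (typically 3) otherwise.
    func kubeletActive(host, keyPath string) bool {
        cmd := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
            "docker@"+host, "sudo systemctl is-active --quiet service kubelet")
        return cmd.Run() == nil
    }

    // apiserverHealthy mirrors the https://<ip>:8443/healthz probe; the apiserver uses a
    // self-signed certificate, so certificate verification is skipped in this sketch.
    func apiserverHealthy(host string) bool {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://" + host + ":8443/healthz")
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        host := "192.168.39.141" // control-plane IP from the trace
        key := "/home/jenkins/minikube-integration/18929-602525/.minikube/machines/multinode-114485/id_rsa"
        fmt.Printf("kubelet active: %v, apiserver healthy: %v\n", kubeletActive(host, key), apiserverHealthy(host))
    }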

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-114485 node start m03 -v=7 --alsologtostderr: (28.402179678s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.04s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-114485 node delete m03: (1.942024285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.50s)
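
The check at multinode_test.go:444 above relies on a kubectl go-template that prints the Ready condition status of each node, one per line, so after deleting m03 the remaining two nodes should each emit "True". A minimal standalone version of that check, assuming kubectl points at the same cluster (an illustration, not the test's actual helper):

    // readycheck.go: run the Ready-condition go-template from the log and verify every node is Ready.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if strings.TrimSpace(line) != "True" {
                fmt.Println("node not Ready:", line)
                return
            }
        }
        fmt.Println("all nodes Ready")
    }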

                                                
                                    
TestMultiNode/serial/RestartMultiNode (191.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114485 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 14:06:59.762444  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
E0520 14:08:01.807588  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114485 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.753027116s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114485 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (191.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114485
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114485-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-114485-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.080267ms)

                                                
                                                
-- stdout --
	* [multinode-114485-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-114485-m02' is duplicated with machine name 'multinode-114485-m02' in profile 'multinode-114485'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114485-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114485-m03 --driver=kvm2  --container-runtime=crio: (46.994051768s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-114485
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-114485: exit status 80 (217.581827ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-114485 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-114485-m03 already exists in multinode-114485-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-114485-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.30s)

                                                
                                    
TestScheduledStopUnix (112.62s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-121216 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-121216 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.029629082s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-121216 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-121216 -n scheduled-stop-121216
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-121216 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-121216 --cancel-scheduled
E0520 14:14:24.856398  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-121216 -n scheduled-stop-121216
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-121216
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-121216 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-121216
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-121216: exit status 7 (67.979029ms)

                                                
                                                
-- stdout --
	scheduled-stop-121216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-121216 -n scheduled-stop-121216
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-121216 -n scheduled-stop-121216: exit status 7 (65.613775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-121216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-121216
--- PASS: TestScheduledStopUnix (112.62s)
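
TestScheduledStopUnix schedules a stop, cancels it, then schedules another and waits for the host to go down; the two "exit status 7" results near the end are the expected sign that `minikube status` now sees the host as Stopped. A small Go sketch of that wait loop, using the profile name and binary path from the log (illustrative only, not the test's actual helper):

    // waitstop.go: poll `minikube status --format={{.Host}}` until the host reports Stopped.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        profile := "scheduled-stop-121216" // profile name from the log
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // `minikube status` exits non-zero (7 in the log) once the host is stopped,
            // so ignore the error and inspect the printed state instead.
            out, _ := exec.Command("out/minikube-linux-amd64", "status",
                "--format={{.Host}}", "-p", profile).Output()
            if strings.TrimSpace(string(out)) == "Stopped" {
                fmt.Println("host stopped")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for scheduled stop")
    }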

                                                
                                    
TestRunningBinaryUpgrade (211.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2290091642 start -p running-upgrade-016464 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2290091642 start -p running-upgrade-016464 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.921065225s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-016464 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-016464 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.050480517s)
helpers_test.go:175: Cleaning up "running-upgrade-016464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-016464
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-016464: (1.227794565s)
--- PASS: TestRunningBinaryUpgrade (211.16s)
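
TestRunningBinaryUpgrade exercises an in-place upgrade: it starts a cluster with a released v1.26.0 binary, re-runs `start` on the same profile with the freshly built binary, then deletes the profile. A compressed Go sketch of that sequence, using the binary paths and flags from the log (illustrative only):

    // upgrade.go: start a profile with an old minikube binary, then upgrade it in place with the new one.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(bin string, args ...string) {
        cmd := exec.Command(bin, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("command failed:", bin, err)
            os.Exit(1)
        }
    }

    func main() {
        profile := "running-upgrade-016464"        // profile name from the log
        old := "/tmp/minikube-v1.26.0.2290091642"  // released binary used by the test
        run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
        run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=2200",
            "--driver=kvm2", "--container-runtime=crio")
        run("out/minikube-linux-amd64", "delete", "-p", profile)
    }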

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.283994ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-903699] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18929
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18929-602525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18929-602525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (87.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903699 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903699 --driver=kvm2  --container-runtime=crio: (1m27.706129362s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903699 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (87.97s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (152.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4159273314 start -p stopped-upgrade-944015 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4159273314 start -p stopped-upgrade-944015 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.631826569s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4159273314 -p stopped-upgrade-944015 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4159273314 -p stopped-upgrade-944015 stop: (1.436966521s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-944015 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-944015 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.179273167s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.25s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (65.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0520 14:16:59.760132  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/functional-694790/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.628244106s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903699 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-903699 status -o json: exit status 2 (240.684362ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-903699","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-903699
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.68s)

                                                
                                    
TestNoKubernetes/serial/Start (28.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0520 14:18:01.807652  609867 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18929-602525/.minikube/profiles/addons-840762/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903699 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.937929061s)
--- PASS: TestNoKubernetes/serial/Start (28.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-903699 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-903699 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.962686ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
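
VerifyK8sNotRunning passes precisely because the command fails: `systemctl is-active --quiet` exits non-zero (typically 3 for an inactive unit) when the kubelet is not running, which `minikube ssh` surfaces as exit status 1 with "Process exited with status 3" on stderr. A short sketch of the same assertion, assuming the profile name from the log:

    // verifynok8s.go: assert the kubelet unit is NOT active inside the NoKubernetes profile.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-903699",
            "sudo systemctl is-active --quiet service kubelet")
        if err := cmd.Run(); err != nil {
            // A non-zero exit is the expected outcome here: the kubelet should not be running.
            fmt.Println("kubelet inactive as expected:", err)
            return
        }
        fmt.Println("unexpected: kubelet is active")
    }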

                                                
                                    
TestNoKubernetes/serial/ProfileList (27.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.161494472s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (11.891421713s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-903699
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-903699: (1.591275846s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (41.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903699 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903699 --driver=kvm2  --container-runtime=crio: (41.454931066s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.46s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-944015
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-903699 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-903699 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.03947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (72.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-462644 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-462644 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.909182772s)
--- PASS: TestPause/serial/Start (72.91s)

                                                
                                    

Test skip (33/221)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    